CPU Scheduling Concepts

Questions and Answers

What is the primary goal of CPU scheduling in operating systems?

  • To minimize memory usage
  • To reduce the number of input/output operations
  • To prevent system crashes
  • To maximize CPU utilization and throughput (correct)

In CPU scheduling, a CPU burst refers to the period when a process is waiting for I/O operations to complete.

False

What is the role of a short-term scheduler (CPU scheduler)?

It selects from among the processes in the ready queue and allocates the CPU to one of them.

The time it takes for the dispatcher to stop one process and start another is known as ______ latency.

dispatch

Match the following CPU scheduling criteria with their descriptions:

CPU utilization = Keep the CPU as busy as possible; Throughput = Number of processes that complete their execution per time unit; Turnaround time = Amount of time to execute a particular process; Waiting time = Amount of time a process has been waiting in the ready queue.

Which of the following scheduling algorithm optimization criteria aims to minimize the time a process spends waiting in the ready queue?

Min waiting time

In First-Come, First-Served (FCFS) scheduling, the process with the shortest burst time is always executed first.

False

What is the convoy effect in the context of FCFS scheduling?

A short process is stuck waiting behind a long process.

The Shortest-Job-First (SJF) scheduling algorithm is optimal because it gives ______ average waiting time for a given set of processes.

minimum

What is a major challenge in implementing the Shortest-Job-First (SJF) scheduling algorithm?

Knowing the length of the next CPU request

Priority scheduling always favors processes with longer CPU burst times.

False

What is the starvation problem in priority scheduling, and how can it be addressed?

Low-priority processes may never execute. It can be addressed by aging: as time progresses, the priority of the waiting process is increased.

In the Round Robin (RR) scheduling algorithm, each process gets a small unit of CPU time known as a(n) ______.

time quantum

In Round Robin scheduling, what happens after a process uses its time quantum?

It is preempted and added to the end of the ready queue.

A large time quantum in Round Robin scheduling effectively makes it behave like First-Come, First-Served (FCFS).

True

Describe how multilevel queue scheduling works.

The ready queue is partitioned into separate queues; each process is permanently assigned to one queue, and each queue has its own scheduling algorithm.

In multilevel queue scheduling, processes are ______ assigned to a specific queue and do not typically move between queues.

permanently

What is a key characteristic of the Multilevel Feedback Queue scheduling algorithm?

Processes can move between queues based on their CPU burst characteristics.

In symmetric multiprocessing (SMP), only one processor accesses the system data structures to avoid data sharing issues.

False

Define processor affinity in the context of multi-processor scheduling.

A process has an affinity for the processor on which it is currently running.

Placing multiple processor cores on the same physical chip is known as ______ processors.

multicore

What is the primary advantage of multicore processors?

It takes advantage of memory stalls to make progress on another thread.

Soft real-time systems provide a strict guarantee on when a critical real-time process will be scheduled.

False

What are the two types of latencies that affect performance in real-time systems?

Interrupt latency and dispatch latency

In real-time scheduling, a ______ task requires the CPU at constant intervals.

periodic

In real-time scheduling, the rate of a periodic task is defined as:

The inverse of its period.

Virtualization software is fully aware of the CPU needs and scheduling algorithms of the guest operating systems it hosts.

False

How does virtualization affect scheduling?

Virtualization software schedules multiple guests onto the CPU(s), while each guest does its own scheduling without knowing that it does not own the CPUs.

The Completely Fair Scheduler (CFS) in Linux bases its scheduling decisions on the ______ of CPU time each task has received.

proportion

In the Linux Completely Fair Scheduler (CFS), a lower 'nice' value indicates:

A higher priority.

In Linux scheduling, real-time tasks have dynamic priorities that change based on system load.

False

What approach does Windows use for scheduling?

Priority-based preemptive scheduling

In Windows scheduling, the ______ selects the next thread to run.

dispatcher

In Windows scheduling, real-time threads:

Can preempt non-real-time threads.

An idle thread in Windows is run when there are no run-able threads available.

True

Flashcards

CPU-I/O Burst Cycle

Process execution consists of alternating CPU execution and I/O wait periods.

Dispatch Latency

The time it takes for the dispatcher to stop one process and start another.

CPU utilization

Keep the CPU as busy as possible to maximize resource use.

Throughput

Number of processes that complete their execution per time unit.

Turnaround time

The amount of time it takes to execute a particular process from start to finish.

Waiting time

The amount of time a process has been waiting in the ready queue.

Response time

The time it takes from when a request was submitted until the first response is produced.

Convoy Effect

Short processes are stuck waiting behind a long process that is holding the CPU.

Shortest-Job-First (SJF)

Associate with each process the length of its next CPU burst and schedule the process with the shortest time.

Priority Scheduling

Allocating the CPU to the process with the highest priority.

Starvation

Low-priority processes may never execute because higher-priority processes keep getting the CPU.

Aging

As time progresses, increase the priority of a process to combat starvation.

Round Robin (RR)

Each process gets a small unit of CPU time, known as a time quantum.

Multilevel Queue

The ready queue is partitioned into separate queues, each with its own scheduling algorithm; scheduling must also occur between the queues.

Multilevel Feedback Queue

A process can move between the various queues.

Multiple-Processor Scheduling

CPU scheduling becomes more complex when multiple CPUs are available.

Symmetric Multiprocessing

Each processor is self-scheduling; processes sit in a common ready queue, or each processor has its own private queue of ready processes.

Processor Affinity

A process has an affinity for the processor on which it is currently running.

Soft real-time systems

No guarantee as to when a critical real-time process will be scheduled.

Hard real-time systems

A task must be serviced by its deadline.

Interrupt Latency

Time from the arrival of an interrupt to the start of the routine that services it.

Dispatch Latency

Time for the scheduler to take the current process off the CPU and switch to another.

Periodic

A periodic task requires the CPU at constant intervals.

Virtualization Scheduling

Virtualization software schedules multiple guests onto CPU(s).

Linux Scheduling

The scheduler picks the highest-priority task in the highest scheduling class.

vruntime

Virtual run time; the scheduler picks the task with the lowest vruntime to run next.

Target Latency

The interval of time during which each runnable task should run at least once.

Windows Scheduling

Priority-based preemptive scheduling; the highest-priority runnable thread runs next.

Study Notes

  • Chapter 6 focuses on CPU scheduling

Basic Concepts

  • CPU utilization is maximized through multiprogramming.
  • Process execution occurs in a CPU-I/O burst cycle.
  • The CPU burst is followed by an I/O burst.
  • CPU burst distribution is a primary concern in scheduling.

CPU Scheduler

  • The short-term scheduler selects a process from the ready queue to allocate the CPU.
  • The ready queue can be ordered in various ways.
  • CPU scheduling decisions occur when a process: (1) switches from running to waiting, (2) switches from running to ready, (3) switches from waiting to ready, or (4) terminates.
  • Scheduling under circumstances 1 and 4 is nonpreemptive; all other scheduling is preemptive.
  • Preemptive scheduling needs consideration for shared data access, kernel mode preemption, and handling interrupts during OS activities.

Dispatcher

  • The dispatcher module grants CPU control to the process selected by the short-term scheduler.
  • This involves context switching, switching to user mode, and jumping to the appropriate location in the user program to restart it.
  • Dispatch latency is the time it takes for the dispatcher to stop one process and start another.

Scheduling Criteria

  • CPU utilization aims to keep the CPU as busy as possible.
  • Throughput refers to the number of processes completing their execution per time unit.
  • Turnaround time is the total time to execute a particular process.
  • Waiting time is the time a process spends waiting in the ready queue.
  • Response time is the time from request submission to the first response (relevant in time-sharing environments).
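
The relationships among these criteria can be shown in a few lines. Below is a minimal sketch with hypothetical field names (arrival, burst, first_run, completion) chosen for illustration; it simply restates the definitions above in code.

```python
# Minimal sketch of the scheduling criteria defined above (hypothetical helper).
def scheduling_metrics(arrival, burst, first_run, completion):
    turnaround = completion - arrival   # total time from submission to finish
    waiting = turnaround - burst        # time spent in the ready queue
    response = first_run - arrival      # time until the first response
    return turnaround, waiting, response

# Example: arrives at t=0, first runs at t=3, needs a 6-unit burst, finishes at t=9.
print(scheduling_metrics(arrival=0, burst=6, first_run=3, completion=9))  # (9, 3, 3)
```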

Scheduling Algorithm Optimization Criteria

  • Aim for maximum CPU utilization and throughput.
  • Aim for minimum turnaround time, waiting time, and response time.

First-Come, First-Served (FCFS) Scheduling

  • FCFS schedules processes in the order in which they arrive.
  • Suppose processes arrive in the order P1, P2, P3 with burst times 24, 3, and 3 respectively.
  • The Gantt Chart visually represents the schedule execution.
  • The waiting time for P1 is 0, for P2 is 24, and for P3 is 27.
  • The average waiting time is (0 + 24 + 27)/3 = 17.
  • If the processes arrive in the order P2, P3, P1, the Gantt chart changes, the waiting time for P1 is 6, for P2 is 0, and for P3 is 3.
  • The average waiting time becomes (6 + 0 + 3)/3 = 3, which is much better than the previous case.
  • FCFS can lead to the convoy effect, where a short process is delayed behind a long process. The convoy effect occurs when one CPU-bound process is followed by many I/O-bound processes.
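
The waiting times above can be reproduced with a short sketch, assuming (as in the example) that all processes arrive at time 0 and run in the order given:

```python
# Sketch of FCFS waiting times; processes are assumed to arrive at t=0
# in the order listed and run to completion one after another.
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # a process waits until all earlier arrivals finish
        clock += b
    return waits

for order in ([24, 3, 3], [3, 3, 24]):        # P1,P2,P3  vs.  P2,P3,P1
    w = fcfs_waiting_times(order)
    print(order, w, sum(w) / len(w))          # averages 17.0 and 3.0
```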

Shortest-Job-First (SJF) Scheduling

  • Associates each process with the length of its next CPU burst, scheduling the process with the shortest time.
  • SJF is optimal, as it gives the minimum average waiting time for a set of processes.
  • A challenge is knowing the length of the next CPU request, potentially solved by user input.
  • An example using SJF is with processes P1, P2, P3, P4 having burst times 6, 8, 7, 3; the average waiting time calculates to (3 + 16 + 9 + 0) / 4 = 7.
  • Shortest-remaining-time-first adds varying arrival times and preemption to the analysis.
  • Consider P1, P2, P3, P4 arriving at times 0, 1, 2, 3 with burst times 8, 4, 9, 5; the average waiting time calculates to [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 6.5 msec.
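
A minimal sketch of the nonpreemptive SJF example above, assuming all four processes arrive at time 0:

```python
# Sketch of nonpreemptive SJF: run the shortest job first and record waits.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

w = sjf_waiting_times([6, 8, 7, 3])           # P1..P4
print(w, sum(w) / len(w))                     # [3, 16, 9, 0] -> average 7.0
```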

Priority Scheduling

  • Associates a priority number with each process.
  • CPU allocation goes to the process with the highest priority (smallest integer often denotes highest priority).
  • Priority scheduling can be preemptive or nonpreemptive.
  • SJF is a form of priority scheduling where priority is the inverse of the predicted next CPU burst time.
  • A problem with priority scheduling is starvation.
  • Aging is a solution, as time progresses, the priority of the process increases.
  • An example: processes P1, P2, P3, P4, P5 with burst times 10, 1, 2, 1, 5 and priorities 3, 1, 4, 5, 2 give an average waiting time of 8.2 msec (see the sketch below).
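
The same idea reproduces the priority example, again assuming all processes arrive at time 0 and that a smaller number means a higher priority:

```python
# Sketch of nonpreemptive priority scheduling (smaller number = higher priority).
def priority_waiting_times(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

w = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])   # P1..P5
print(w, sum(w) / len(w))                                       # average 8.2 msec
```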

Round Robin (RR)

  • Each process gets a small unit of CPU time, known as the time quantum, usually 10-100 milliseconds.
  • After this time has elapsed, the process is preempted and added to the end of the ready queue.
  • If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units.
  • No process waits more than (n-1)q time units.
  • A timer interrupts every quantum to schedule the next process.
  • If q is large, RR behaves like FCFS (FIFO). If q is small, the context-switch overhead becomes too high, so q must be sufficiently large relative to the context-switch time.
  • Using a time quantum of 4, for processes P1, P2, and P3 with burst times 24, 3, 3 respectively, RR typically results in a higher average turnaround than SJF, but offers better response.
  • Choose q large compared to context switch time.
  • In practice, q is usually 10 ms to 100 ms, while a context switch takes less than 10 microseconds; the sketch below traces the quantum-4 example.
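
A small sketch of Round Robin, assuming all processes arrive at time 0; run on the quantum-4 example above, it reports each process's completion time:

```python
# Sketch of Round Robin with time quantum q; all processes arrive at t=0.
from collections import deque

def round_robin(bursts, q):
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    clock, completion = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)               # preempted: back to the end of the queue
        else:
            completion[i] = clock
    return completion

print(round_robin([24, 3, 3], q=4))       # P1 finishes at 30, P2 at 7, P3 at 10
```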

Multilevel Queue

  • A ready queue is partitioned into separate queues like foreground (interactive) and background (batch).
  • Each queue employs its own scheduling algorithm; for example, foreground – RR, background – FCFS.
  • Scheduling must also occur between the queues, using either fixed-priority scheduling or time-slice allocation.

Multilevel Feedback Queue

  • Allows processes to move between queues, implementing aging.
  • The scheduler is defined by parameters such as the number of queues, the scheduling algorithm for each queue, the method used to upgrade or demote a process, and the method used to determine which queue a process enters.
  • An example of queues is: Q0 – RR with time quantum 8 milliseconds, Q1 – RR time quantum 16 milliseconds, and Q2 – FCFS.
  • A new job enters queue Q0, is served FCFS, and receives 8 milliseconds of CPU time. If it does not finish, it moves to queue Q1, where it receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2, as traced in the sketch below.
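
A toy trace of the three-queue example above (the function name mlfq_trace is illustrative, not from the lesson):

```python
# Trace how a single job moves through the example queues: 8 ms in Q0,
# 16 more ms in Q1, and anything still unfinished runs FCFS in Q2.
def mlfq_trace(burst):
    trace, remaining = [], burst
    for queue, quantum in (("Q0", 8), ("Q1", 16)):
        run = min(quantum, remaining)
        trace.append((queue, run))
        remaining -= run
        if remaining == 0:
            return trace
    trace.append(("Q2", remaining))       # served FCFS until completion
    return trace

print(mlfq_trace(5))    # [('Q0', 5)]
print(mlfq_trace(20))   # [('Q0', 8), ('Q1', 12)]
print(mlfq_trace(40))   # [('Q0', 8), ('Q1', 16), ('Q2', 16)]
```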

Multiple-Processor Scheduling

  • Scheduling becomes more complex with multiple CPUs.
  • Involves homogeneous processors within a multiprocessor.
  • Asymmetric multiprocessing designates one processor to access system data structures, reducing data sharing needs.
  • Symmetric multiprocessing (SMP) allows each processor to self-schedule processes from a common ready queue or private ready queues.
  • Processor affinity means a process has a preference for the processor it is currently running on, which can be soft or hard.

Multicore Processors

  • A recent trend places multiple processor cores on the same physical chip, which is faster and consumes less power.
  • Multiple threads per core are also growing in popularity.
  • Multithreading takes advantage of memory stalls to make progress on another thread while the memory retrieval completes.

Real-Time CPU Scheduling

  • Presents unique challenges.
  • Soft real-time systems do not offer guarantees on when critical processes will be scheduled.
  • Hard real-time systems require tasks to be serviced by their deadline.
  • Two types of latencies affect performance: interrupt latency and dispatch latency.
  • Dispatch latency includes the time to preempt any process running in kernel mode and to have low-priority processes release the resources they hold.
  • For real-time scheduling, schedulers must support preemptive, priority-based scheduling; this mainly guarantees soft real-time.
  • For hard real-time, must also provide ability to meet deadlines.
  • Periodic processes require the CPU at constant intervals; the rate of a periodic task is the inverse of its period.

Virtualization and Scheduling

  • Virtualization software schedules multiple guests onto CPU(s).
  • Each guest performs its own scheduling, without knowledge of CPU ownership.
  • This can lead to poor response time and affect time-of-day clocks.
  • Scheduling efforts from guests can be undone.

Operating System Examples

  • Linux scheduling and Windows scheduling.

Linux Scheduling

  • The Completely Fair Scheduler (CFS) is used.
  • Employs scheduling classes, each with a specific priority.
  • The scheduler selects the highest-priority task in the highest scheduling class, and CFS bases its decisions on the proportion of CPU time each task has received.
  • Two scheduling classes included: the default and real-time.
  • Quantum is calculated based on the nice value with a range from -20 to +19, a lower value is a higher priority.
  • It calculates target latency, which increases with the number of active tasks.
  • CFS maintains a virtual run time (vruntime) for each task, adjusted by a decay factor based on the task's priority; at the normal default priority, virtual run time equals actual run time.
  • To decide what to run next, the scheduler picks the task with the lowest virtual run time (see the sketch below).
  • Real-time scheduling adheres to POSIX.1b standards and real-time tasks have static priorities.
  • The range of nice values from -20 to +19 maps to global priority ranges.
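
The vruntime idea can be sketched as follows. The decay factor used here is an illustrative assumption, not Linux's actual nice-to-weight table; the point is only that a lower nice value makes vruntime grow more slowly, so the task is picked more often:

```python
# Simplified, hypothetical sketch of the CFS idea: accumulate vruntime scaled by
# a priority-based factor and always pick the task with the lowest vruntime.
class Task:
    def __init__(self, name, nice):
        self.name, self.nice, self.vruntime = name, nice, 0.0

def charge(task, wall_ms):
    # Lower nice (higher priority) makes vruntime grow more slowly,
    # so the task is chosen again sooner.
    factor = 1.0 + task.nice / 20.0           # illustrative decay factor
    task.vruntime += wall_ms * factor

def pick_next(tasks):
    return min(tasks, key=lambda t: t.vruntime)

tasks = [Task("editor", nice=-5), Task("batch", nice=10)]
for _ in range(6):
    t = pick_next(tasks)
    charge(t, wall_ms=10)
    print(t.name, round(t.vruntime, 1))       # "editor" is scheduled more often
```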

Windows Scheduling

  • Employs priority-based preemptive scheduling.
  • The dispatcher selects the next thread to run; the highest-priority runnable thread always runs next.
  • A thread runs until it blocks or is preempted by a higher-priority thread.
  • Real-time threads can preempt non-real-time threads.
  • It uses a 32-level priority scheme.
  • The variable class ranges from 1-15, and the real-time class ranges from 16-31.
  • Priority 0 is reserved for the memory-management thread, and there is a queue for each priority level.
  • If no run-able thread is found, the dispatcher runs the idle thread (see the sketch below).
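
A toy sketch of the dispatcher behaviour described above, with one queue per priority level and an idle thread when nothing is run-able (the thread names are illustrative):

```python
# Toy model of the 32-level scheme: scan from priority 31 down to 0 and run the
# first ready thread found; fall back to the idle thread when all queues are empty.
READY = {level: [] for level in range(32)}    # one ready queue per priority level

def add_thread(name, priority):
    READY[priority].append(name)

def pick_next():
    for level in range(31, -1, -1):           # highest priority first
        if READY[level]:
            return READY[level].pop(0)
    return "idle-thread"                      # no run-able thread available

add_thread("realtime-audio", 24)              # real-time class: 16-31
add_thread("background-task", 4)              # variable class: 1-15
print(pick_next())   # realtime-audio
print(pick_next())   # background-task
print(pick_next())   # idle-thread
```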
