Questions and Answers
What is the primary goal of CPU scheduling in operating systems?
- To minimize memory usage
- To reduce the number of input/output operations
- To prevent system crashes
- To maximize CPU utilization and throughput (correct)
In CPU scheduling, a CPU burst refers to the period when a process is waiting for I/O operations to complete.
False
What is the role of a short-term scheduler (CPU scheduler)?
Selects from among the processes in the ready queue and allocates the CPU to one of them.
The time it takes for the dispatcher to stop one process and start another is known as ______ latency.
Match the following CPU scheduling criteria with their descriptions:
Which of the following scheduling algorithm optimization criteria aims to minimize the time a process spends waiting in the ready queue?
In First-Come, First-Served (FCFS) scheduling, the process with the shortest burst time is always executed first.
What is the convoy effect in the context of FCFS scheduling?
The Shortest-Job-First (SJF) scheduling algorithm is optimal because it gives ______ average waiting time for a given set of processes.
What is a major challenge in implementing the Shortest-Job-First (SJF) scheduling algorithm?
Priority scheduling always favors processes with longer CPU burst times.
What is the starvation problem in priority scheduling, and how can it be addressed?
In the Round Robin (RR) scheduling algorithm, each process gets a small unit of CPU time known as a(n) ______.
In Round Robin scheduling, what happens after a process uses its time quantum?
A large time quantum in Round Robin scheduling effectively makes it behave like First-Come, First-Served (FCFS).
Describe how multilevel queue scheduling works.
In multilevel queue scheduling, processes are ______ assigned to a specific queue and do not typically move between queues.
What is a key characteristic of the Multilevel Feedback Queue scheduling algorithm?
In symmetric multiprocessing (SMP), only one processor accesses the system data structures to avoid data sharing issues.
Define processor affinity in the context of multi-processor scheduling.
Placing multiple processor cores on the same physical chip is known as ______ processors.
What is the primary advantage of multicore processors?
Soft real-time systems provide a strict guarantee on when a critical real-time process will be scheduled.
What are the two types of latencies that affect performance in real-time systems?
In real-time scheduling, a ______ task requires the CPU at constant intervals.
In real-time scheduling, the rate of a periodic task is defined as:
Virtualization software is fully aware of the CPU needs and scheduling algorithms of the guest operating systems it hosts.
How does virtualization affect scheduling?
The Completely Fair Scheduler (CFS) in Linux bases its scheduling decisions on the ______ of CPU time each task has received.
In the Linux Completely Fair Scheduler (CFS), a lower 'nice' value indicates:
In Linux scheduling, real-time tasks have dynamic priorities that change based on system load.
What approach does Windows use for scheduling?
In Windows scheduling, the ______ selects the next thread to run.
In Windows scheduling, real-time threads:
An idle thread in Windows is run when there are no run-able threads available.
Flashcards
CPU-I/O Burst Cycle
Process execution consists of alternating CPU execution and I/O wait periods.
Dispatch Latency
The time it takes for the dispatcher to stop one process and start another.
CPU utilization
Keep the CPU as busy as possible to maximize resource use.
Throughput
Number of processes that complete their execution per time unit.
Turnaround time
Total time to execute a particular process, from submission to completion.
Waiting time
Time a process spends waiting in the ready queue.
Response time
Time from request submission until the first response is produced (relevant in time-sharing environments).
Convoy Effect
Short processes are held up behind one long process, as when many I/O-bound processes queue behind a CPU-bound process.
Shortest-Job-First (SJF)
Schedules the process with the shortest next CPU burst; optimal in that it gives the minimum average waiting time.
Priority Scheduling
Allocates the CPU to the process with the highest priority (often the smallest integer).
Starvation
Low-priority processes may wait indefinitely and never execute.
Aging
Gradually increasing a waiting process's priority over time to prevent starvation.
Round Robin (RR)
Each process gets a small time quantum of CPU time, then is preempted and moved to the end of the ready queue.
Multilevel Queue
The ready queue is partitioned into separate queues (e.g., foreground and background), each with its own scheduling algorithm.
Multilevel Feedback Queue
Allows processes to move between queues, implementing aging.
Multiple-Processor Scheduling
CPU scheduling when multiple CPUs are available; more complex than single-CPU scheduling.
Symmetric Multiprocessing
Each processor is self-scheduling from a common ready queue or its own private ready queue.
Processor Affinity
A process's preference to keep running on the processor it is currently on; may be soft or hard.
Soft real-time systems
Offer no guarantee as to when a critical real-time process will be scheduled.
Hard real-time systems
Require that a task be serviced by its deadline.
Interrupt Latency
Time from the arrival of an interrupt to the start of the routine that services it.
Periodic
A task that requires the CPU at constant intervals.
Virtualization Scheduling
The virtualization layer schedules multiple guest operating systems onto the CPU(s); each guest schedules its own processes without knowing it does not own the real CPUs.
Linux Scheduling
Uses the Completely Fair Scheduler (CFS) with priority-ordered scheduling classes.
vruntime
Virtual run time that CFS maintains for each task; a priority-based decay factor makes it grow more slowly for higher-priority tasks.
Target Latency
Interval of time during which every runnable task should run at least once; grows with the number of active tasks.
Windows Scheduling
Priority-based, preemptive scheduling using a 32-level priority scheme.
Study Notes
- Chapter 6 focuses on CPU scheduling
Basic Concepts
- CPU utilization is maximized through multiprogramming.
- Process execution occurs in a CPU-I/O burst cycle.
- The CPU burst is followed by an I/O burst.
- CPU burst distribution is a primary concern in scheduling.
CPU Scheduler
- The short-term scheduler selects a process from the ready queue to allocate the CPU.
- The ready queue can be ordered in various ways.
- CPU scheduling decisions occur when a process (1) switches from running to waiting, (2) switches from running to ready, (3) switches from waiting to ready, or (4) terminates.
- Scheduling under conditions 1 and 4 is nonpreemptive; all other scheduling is preemptive.
- Preemptive scheduling needs consideration for shared data access, kernel mode preemption, and handling interrupts during OS activities.
Dispatcher
- The dispatcher module grants CPU control to the process selected by the short-term scheduler.
- This involves context switching, switching to user mode, and jumping to the appropriate location in the user program to restart it.
- Dispatch latency is the time it takes for the dispatcher to stop one process and start another.
Scheduling Criteria
- CPU utilization aims to keep the CPU as busy as possible.
- Throughput refers to the number of processes completing their execution per time unit.
- Turnaround time is the total time to execute a particular process.
- Waiting time is the time a process spends waiting in the ready queue.
- Response time is the time from request submission to the first response (relevant in time-sharing environments).
Scheduling Algorithm Optimization Criteria
- Aim for maximum CPU utilization and throughput.
- Aim for minimum turnaround time, waiting time, and response time.
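These criteria reduce to simple formulas. A minimal sketch (Python, hypothetical helper names), assuming each process is described by its arrival, burst, first-run, and completion times:

```python
# Scheduling criteria as formulas (hypothetical helper names).

def turnaround_time(arrival, completion):
    # Total time from submission to completion.
    return completion - arrival

def waiting_time(arrival, burst, completion):
    # Time spent in the ready queue: turnaround minus actual CPU time.
    return turnaround_time(arrival, completion) - burst

def response_time(arrival, first_run_start):
    # Time until the process first gets the CPU (first response).
    return first_run_start - arrival
```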
First-Come, First-Served (FCFS) Scheduling
- FCFS is a scheduling algorithm.
- Suppose processes arrive in the order P1, P2, P3 with burst times 24, 3, and 3 respectively.
- The Gantt Chart visually represents the schedule execution.
- The waiting time for P1 is 0, for P2 is 24, and for P3 is 27.
- The average waiting time is (0 + 24 + 27)/3 = 17.
- If the processes arrive in the order P2, P3, P1, the Gantt chart changes: the waiting time for P1 is 6, for P2 is 0, and for P3 is 3.
- The average waiting time becomes (6 + 0 + 3)/3 = 3, which is much better than the previous case (both cases are reproduced in the sketch below).
- FCFS can lead to the convoy effect: short processes are delayed behind one long process, typically when many I/O-bound processes queue behind a single CPU-bound process.
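A minimal FCFS simulation (Python, hypothetical function name, assuming all processes arrive at time 0 in the given order) reproduces both averages above:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS, assuming all processes
    arrive at time 0 in the given order (as in the example above)."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # a process waits until all earlier bursts finish
        clock += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))  # P1, P2, P3: [0, 24, 27] -> average 17
print(fcfs_waiting_times([3, 3, 24]))  # P2, P3, P1: [0, 3, 6]   -> average 3
```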
Shortest-Job-First (SJF) Scheduling
- Associates each process with the length of its next CPU burst, scheduling the process with the shortest time.
- SJF is optimal, as it gives the minimum average waiting time for a set of processes.
- A challenge is knowing the length of the next CPU request; one approach is to ask the user, another is to estimate it from previous bursts by exponential averaging: tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n), where t(n) is the length of the nth burst and tau(n) the previous prediction.
- An example using SJF is with processes P1, P2, P3, P4 having burst times 6, 8, 7, 3; the average waiting time calculates to (3 + 16 + 9 + 0) / 4 = 7.
- Shortest-remaining-time-first adds varying arrival times and preemption to the analysis.
- Consider P1, P2, P3, P4 arriving at times 0, 1, 2, 3 with burst times 8, 4, 9, 5; the average waiting time calculates to [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec.
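A non-preemptive SJF sketch (Python, hypothetical function name, all arrivals at time 0 as in the burst-time-only example) reproduces the average of 7; the preemptive shortest-remaining-time-first variant would additionally re-evaluate the queue at every arrival:

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF with all arrivals at time 0: run the bursts in
    ascending order and record how long each process waited."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

# P1..P4 with bursts 6, 8, 7, 3 -> waits [3, 16, 9, 0], average 7
print(sjf_waiting_times([6, 8, 7, 3]))
```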
Priority Scheduling
- Associates a priority number with each process.
- CPU allocation goes to the process with the highest priority (smallest integer often denotes highest priority).
- Priority scheduling can be preemptive or nonpreemptive.
- SJF is a form of priority scheduling where priority is the inverse of the predicted next CPU burst time.
- A problem with priority scheduling is starvation.
- Aging is a solution: as time progresses, the priority of a waiting process increases.
- An example; processes P1, P2, P3, P4, P5 with burst times 10, 1, 2, 1, 5 and priorities 3, 1, 4, 5, 2; the average waiting time is 8.2 msec.
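A non-preemptive priority-scheduling sketch (Python, hypothetical function name, all arrivals at time 0, smaller number = higher priority) reproduces the 8.2 msec average:

```python
def priority_waiting_times(bursts, priorities):
    """Non-preemptive priority scheduling with all arrivals at time 0:
    run processes in ascending priority-number order (smallest = highest)."""
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

# P1..P5: bursts 10, 1, 2, 1, 5 and priorities 3, 1, 4, 5, 2
waits = priority_waiting_times([10, 1, 2, 1, 5], [3, 1, 4, 5, 2])
print(waits, sum(waits) / len(waits))  # [6, 0, 16, 18, 1], average 8.2 msec
```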
Round Robin (RR)
- Each process gets a small unit of CPU time, known as the time quantum (q), usually 10-100 milliseconds.
- After this time has elapsed, the process is preempted and added to the end of the ready queue.
- If there are n processes in the ready queue and the time quantum is q, each process gets 1/n of the CPU time in chunks of at most q time units.
- No process waits more than (n-1)q time units.
- A timer interrupts every quantum to schedule the next process.
- If q is large, RR behaves like FCFS (FIFO); if q is small, context-switch overhead dominates, so q must be sufficiently large relative to the context-switch time.
- Using a time quantum of 4, for processes P1, P2, and P3 with burst times 24, 3, 3 respectively, RR typically results in a higher average turnaround than SJF, but offers better response.
- Choose q large compared to context switch time.
- In practice, q is usually 10ms to 100ms, while context switch is less than 10 usec.
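A Round Robin sketch (Python, hypothetical function name, all arrivals at time 0) for the example above yields completion times of 30, 7, and 10 with q = 4:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round Robin with all arrivals at time 0: each process runs for at most
    `quantum` time units, then is preempted to the back of the ready queue."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    clock, completion = 0, [0] * len(bursts)
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            completion[i] = clock          # process finished
        else:
            queue.append(i)                # preempted: back of the queue
    return completion

# P1, P2, P3 with bursts 24, 3, 3 and q = 4 -> completions [30, 7, 10]
print(round_robin([24, 3, 3], quantum=4))
```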
Multilevel Queue
- A ready queue is partitioned into separate queues like foreground (interactive) and background (batch).
- Each queue employs its own scheduling algorithm; for example, foreground – RR, background – FCFS.
- Scheduling must also occur between the queues, using either fixed-priority scheduling or a time slice allotted to each queue.
Multilevel Feedback Queue
- Allows processes to move between queues, implementing aging.
- The scheduler is defined by parameters such as the number of queues, the scheduling algorithm for each queue, the method used to upgrade or demote a process, and the method used to determine which queue a process enters.
- An example of queues is: Q0 – RR with time quantum 8 milliseconds, Q1 – RR time quantum 16 milliseconds, and Q2 – FCFS.
- A new job enters queue Q0 and, when it gains the CPU, receives 8 milliseconds. If it does not finish, it moves to queue Q1, where it receives an additional 16 milliseconds. If it still does not complete, it is preempted and moved to queue Q2 (traced in the sketch below).
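A highly simplified trace (Python, hypothetical function name, single job, queue parameters from the example above) of how one job is demoted through the three queues; a real scheduler would interleave many jobs and always serve higher queues first:

```python
def mlfq_demotion_trace(burst, quanta=(8, 16)):
    """Trace which queue a single job runs in, using the example parameters:
    Q0 (quantum 8), Q1 (quantum 16), then Q2 (FCFS, runs to completion)."""
    remaining, trace = burst, []
    for level, q in enumerate(quanta):
        run = min(q, remaining)
        trace.append((f"Q{level}", run))
        remaining -= run
        if remaining == 0:
            return trace
    trace.append(("Q2", remaining))     # FCFS: runs until it finishes
    return trace

print(mlfq_demotion_trace(30))  # [('Q0', 8), ('Q1', 16), ('Q2', 6)]
```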
Multiple-Processor Scheduling
- Scheduling becomes more complex with multiple CPUs.
- Involves homogeneous processors within a multiprocessor.
- Asymmetric multiprocessing designates one processor to access system data structures, reducing data sharing needs.
- Symmetric multiprocessing (SMP) allows each processor to self-schedule processes from a common ready queue or private ready queues.
- Processor affinity means a process has a preference for the processor it is currently running on, which can be soft or hard.
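On Linux, hard affinity can be requested from user space. A minimal example using the Python standard library's Linux-only calls os.sched_setaffinity and os.sched_getaffinity pins the current process to CPUs 0 and 1; soft affinity, by contrast, is just the scheduler's own preference and needs no call:

```python
import os

# Pin the calling process (pid 0 means "this process") to CPUs 0 and 1.
# These are Linux-only standard-library functions.
os.sched_setaffinity(0, {0, 1})
print(os.sched_getaffinity(0))   # -> {0, 1}
```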
Multicore Processors
- A recent trend places multiple processor cores on the same physical chip, which is faster and consumes less power.
- Multiple threads per core are also growing in popularity.
- Multithreading takes advantage of memory stalls: while one thread waits for a memory access, the core makes progress on another thread.
Real-Time CPU Scheduling
- Presents unique challenges.
- Soft real-time systems do not offer guarantees on when critical processes will be scheduled.
- Hard real-time systems require tasks to be serviced by their deadline.
- Two types of latencies affect performance: interrupt latency and dispatch latency.
- Dispatch latency includes preemption of kernel-mode processes and release of resources by low-priority processes.
- For real-time scheduling, schedulers must support preemptive, priority-based scheduling; this alone guarantees only soft real-time behavior.
- For hard real-time, must also provide ability to meet deadlines.
- Periodic processes require the CPU at constant intervals; each has a processing time t, a deadline d, and a period p, and its rate is 1/p (see the sketch below).
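A small sketch (Python, hypothetical function name and task values) computes the rate 1/p of each periodic task and the total CPU utilization sum(t/p); using that sum as an admission check is an assumption here, not something stated in the notes above:

```python
def rates_and_utilization(tasks):
    """tasks: list of (t, p) pairs, where t is the processing time required
    each period and p is the period. Rate of a periodic task = 1 / p."""
    rates = [1 / p for _, p in tasks]
    utilization = sum(t / p for t, p in tasks)
    return rates, utilization

# Two hypothetical tasks: t=25 every p=50, and t=35 every p=80.
print(rates_and_utilization([(25, 50), (35, 80)]))
# rates = [0.02, 0.0125], utilization = 0.5 + 0.4375 = 0.9375
```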
Virtualization and Scheduling
- Virtualization software schedules multiple guests onto CPU(s).
- Each guest performs its own scheduling, unaware that it does not actually own the CPUs.
- This can lead to poor response time and affect time-of-day clocks.
- Scheduling efforts from guests can be undone.
Operating System Examples
- Linux scheduling and Windows scheduling.
Linux Scheduling
- The Completely Fair Scheduler (CFS) is used.
- CFS employs scheduling classes, each with a specific priority.
- The scheduler selects the highest-priority task in the highest scheduling class; rather than fixed time quanta, CFS allocates each task a proportion of CPU time.
- Two scheduling classes included: the default and real-time.
- The proportion of CPU time is based on the task's nice value, which ranges from -20 to +19; a lower value means a higher priority.
- It calculates target latency, which increases with the number of active tasks.
- CFS maintains a virtual run time (vruntime) for each task, scaled by a decay factor based on the task's priority: lower priority means a higher decay rate, and the normal (default) priority yields a virtual run time equal to the actual run time.
- To decide what runs next, the scheduler picks the task with the lowest virtual run time.
- Real-time scheduling adheres to POSIX.1b standards and real-time tasks have static priorities.
- Nice values from -20 to +19 map to global priorities 100 to 139, below the real-time range of 0 to 99 (a lower global number means a higher priority).
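A toy model of the CFS idea (Python, hypothetical names): always run the task with the lowest vruntime, and charge lower-nice (higher-priority) tasks less virtual time per unit of real time. The weight formula is an illustrative assumption, not the kernel's actual weight table:

```python
def pick_next(tasks):
    """tasks: dict name -> {'vruntime': float, 'nice': int}.
    CFS-style choice: the runnable task with the smallest vruntime runs next."""
    return min(tasks, key=lambda name: tasks[name]['vruntime'])

def charge(task, real_time_ms, default_weight=1024):
    """Advance vruntime: a lower nice value gives a larger weight, so the task
    accumulates vruntime more slowly and is picked more often.
    (Illustrative weight formula, not the actual kernel weight table.)"""
    weight = default_weight / (1.25 ** task['nice'])
    task['vruntime'] += real_time_ms * (default_weight / weight)

tasks = {'editor': {'vruntime': 0.0, 'nice': -5},
         'batch':  {'vruntime': 0.0, 'nice': 5}}
for _ in range(4):
    name = pick_next(tasks)
    charge(tasks[name], real_time_ms=10)
    print(name, round(tasks[name]['vruntime'], 2))
# The low-nice 'editor' task is selected more often than 'batch'.
```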
Windows Scheduling
- Employs priority-based preemptive scheduling.
- The dispatcher acts as the scheduler: the highest-priority ready thread always runs next.
- A thread runs until it blocks or is preempted by a higher-priority thread.
- Real-time threads can preempt non-real-time threads.
- It uses a 32-level priority scheme.
- The variable class ranges from 1-15, and the real-time class ranges from 16-31.
- Priority 0 is reserved for the memory-management thread, and there is a queue of ready threads for each priority level.
- If no thread is run-able, it runs an idle thread.
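A minimal sketch (Python, hypothetical names) of the 32-queue idea: the dispatcher scans from priority 31 down to 0 and runs the first ready thread it finds; if every queue is empty, the idle thread runs:

```python
from collections import deque

# One ready queue per priority level, 0 (lowest) to 31 (highest).
ready = {prio: deque() for prio in range(32)}

def dispatch():
    """Pick the thread a Windows-style dispatcher would run next:
    the highest-priority non-empty queue wins; otherwise the idle thread."""
    for prio in range(31, -1, -1):
        if ready[prio]:
            return ready[prio].popleft()
    return "idle thread"

ready[8].append("word processor")   # variable class (priorities 1-15)
ready[24].append("audio thread")    # real-time class (priorities 16-31)
print(dispatch())   # audio thread
print(dispatch())   # word processor
print(dispatch())   # idle thread
```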