OS Final Revision 2025 Past Paper (Part 2)

This document is an operating systems past paper for the final revision (part 2) covering material for the year 2025. It contains 50 multiple-choice questions. Topics include process management, memory management, scheduling, and inter-process communication, along with computer systems concepts.
Operating System Final Revision (Part 2) 2025
(Don't forget to study the second-midterm notes very well.)

1. Which of the following is a component of the computer system structure? - A. Hardware - B. Operating System - C. Application Programs - D. All of the above
2. Which of the following is not true about the bootstrap program? - A. Loaded at power-up or reboot - B. Loads application programs - C. Loads operating system kernel - D. Stored in read-only memory (ROM)
3. Which of the following is not true for the process management activities? - A. Providing mechanisms for deadlock handling - B. Creating and deleting user processes only - C. Providing mechanisms for process communication - D. Providing mechanisms for process synchronization
4. What is the situation in which every process in a set of processes is waiting for an event that can be caused only by another process in the set? - A. Recovery - B. Deadlock - C. Swapping - D. Fragmentation
5. "Information in use is copied from slower to faster storage temporarily." This concept is called: - A. Context Switching - B. Moving - C. Caching - D. Transferring
6. Which of the following is a type of computing that delivers computing, storage, and even applications as a service across a network? - A. Real-Time Embedded Systems - B. Cloud Computing - C. Mobile Systems - D. All of the above
7. Which of the following allows operating systems to run as applications within other operating systems? - A. Virtualization - B. Client Server - C. Peer-to-Peer - D. Mobile Systems
8. DMA stands for: - A. Data Memory Access - B. Data Memory Available - C. Direct Memory Access - D. None of the above
9. Which of the following is an advantage of multiprocessor systems? - A. Economy of scale - B. Increased throughput - C. Increased reliability - D. All of the above
10. The standard Arduino platform does not provide an operating system; instead, a small piece of software is used, which is known as a: - A. Boot loader - B. Booter - C. Sketch - D. None of the above
11. Which of the following is responsible for combining the relocatable object files into a single binary executable file? - A. Compiler - B. Linker - C. Loader - D. Assembler
12. Which of the following is responsible for bringing the program into memory to be executed? - A. Compiler - B. Linker - C. Loader - D. Assembler
13. Which operating system structure places all of the functionality of the kernel into a single, static binary file that runs in a single address space? - A. Monolithic - B. Layered - C. Microkernels - D. Loadable kernel modules
14. Which of the following methods structures the operating system by removing all nonessential components from the kernel and implementing them as user-level programs that reside in separate address spaces? - A. Monolithic - B. Layered - C. Microkernels - D. Loadable kernel modules
15. What is the disadvantage of the microkernel operating system structure? - A. It is difficult to extend - B. It is difficult to port the operating system to new architectures - C. The performance overhead of user-space to kernel-space communication - D. All of the above
16. The system must save the state of the old process in its PCB and load the saved state for the new process by using which of the following? - A. Scheduling - B. Context switching - C. Swapping - D. None of the above
17. When a process is executing on the CPU and an interrupt occurs, it moves to which of the following states? - A. Running - B. Ready - C. Waiting - D. Terminated
18. Which one of the following is not one of the operating system services? - A. Resource allocation - B. Protection and Security - C. Error correction - D. Communications
19. In Linux, which of the following methods can be used to pass 3 parameters to the operating system? - A. Registers - B. Stack - C. Queue - D. Block of memory
20. Which of the following system calls is used to create a new process in UNIX? - A. exec() - B.
wait() - C. exit() - D. fork()
21. Which of the following contains memory dynamically allocated during program run time? - A. Text section - B. Stack section - C. Data section - D. Heap section
22. Each process that wants to communicate must explicitly name the recipient or sender of the communication. This is called: - A. Synchronous communication - B. Asynchronous communication - C. Direct communication - D. Indirect communication
23. Which of the following is not one of the four conditions that must hold simultaneously for a deadlock to occur in a system? - A. Mutual exclusion - B. Hold and Send - C. No preemption - D. Circular wait
24. When a detection algorithm determines that a deadlock exists, which of the following methods can be used to recover from the deadlock? - A. Solve the problem manually - B. Using process and thread termination to recover from the deadlock automatically - C. Using resource preemption to recover from the deadlock automatically - D. All of the above
25. Consider two concurrently running processes P1 and P2 with two statements S1 and S2, and the requirement that S1 happen before S2. Select the correct solution using a semaphore "synch". - A. Create a semaphore "synch" initialized to 0 - B. Create a semaphore "synch" initialized to 0 - C. Create a semaphore "synch" initialized to 1 - D. Create a semaphore "synch" initialized to 0
26. To ensure that Peterson's solution will work correctly on modern computer architectures, we must use: - A. Test and Set - B. Compare and Swap - C. Atomic Variable - D. Memory Barrier
27. The Banker's Algorithm is used for: - A. Deadlock Avoidance - B. Deadlock Prevention - C. Deadlock Detection - D. Deadlock Recovery
28. At which of the following can the address binding of instructions and data to memory addresses happen? - A. Compile time - B. Load time - C. Execution time - D. All of the above
29. Which of the following memory allocation algorithms will allocate the smallest hole that is big enough to satisfy a request of size n from a list of free holes? - A. First-fit - B. Best-fit - C. Worst-fit - D. None of the above
30. Which of the following is not a hardware instruction for solving the critical-section problem? - A. Test and Set - B. Mutex Lock - C. Compare and Swap - D. Memory Barrier
31. Which of the following is a solution to the critical-section problem using a semaphore? - A. Create a semaphore "mutex" initialized to 1. - B. Create a semaphore "mutex" initialized to 1. - C. Create a semaphore "mutex" initialized to 0. - D. Create a semaphore "mutex" initialized to 1.
32. The general structure of a process in the critical-section problem includes: - A. Entry Section and Exit Section - B. Remainder Section - C. Critical Section - D. All of the above
33. The sender process is blocked until the message is received by the receiving process or by the mailbox in which of the following? - A. Synchronous communication - B. Non-blocking send - C. Blocking receive - D. All of the above
34. A pair of processes communicating over a network employs a pair of sockets, one for each process. A socket is identified by: - A. IP address only - B. Port number only - C. IP address and port number - D. None of the above
35. When may CPU scheduling decisions be taken? - A. A process terminates - B. A process switches from the waiting state to the running state - C. A process switches from the ready state to the running state - D. All of the above
36. Which one of the following is the number of processes that complete their execution per time unit? - A. Waiting time - B. Turnaround time - C. CPU utilization - D. Throughput
37. Regarding the scheduling algorithm optimization criteria, it is desirable to: - A. Minimize CPU utilization - B. Minimize throughput - C. Minimize turnaround time - D. Maximize waiting time
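The code that distinguished the options of Questions 25 and 31 did not survive transcription, but both are standard textbook semaphore patterns, and they can be sketched together (Python's threading.Semaphore stands in for a counting semaphore: wait() is acquire(), signal() is release()):

```python
import threading

# Q25 pattern: force S1 (in P1) to happen before S2 (in P2).
synch = threading.Semaphore(0)        # semaphore "synch" initialized to 0
order = []

def p1():
    order.append("S1")                # S1
    synch.release()                   # signal(synch)

def p2():
    synch.acquire()                   # wait(synch) blocks until P1 signals
    order.append("S2")                # S2

t2 = threading.Thread(target=p2); t2.start()
t1 = threading.Thread(target=p1); t1.start()
t1.join(); t2.join()
print(order)                          # always ['S1', 'S2']

# Q31 pattern: mutual exclusion with a semaphore "mutex" initialized to 1.
mutex = threading.Semaphore(1)
counter = 0

def worker():
    global counter
    for _ in range(10000):
        mutex.acquire()               # wait(mutex): entry section
        counter += 1                  # critical section
        mutex.release()               # signal(mutex): exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                        # 40000: no updates lost
```

The initial values carry the meaning: 0 makes the first wait() block (ordering), while 1 lets exactly one process enter at a time (mutual exclusion).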
38. Consider the following set of processes, with the length of the CPU burst time given in milliseconds. The processes are assumed to have arrived at time 0 in the order P1, P2, P3, P4, P5. Use this data to answer the following questions (from Question 38 to Question 46). Using the First Come, First Served (FCFS) algorithm, what is the waiting time of process P3? - A. 8 - B. 2 - C. 3 - D. 11
39. Using the First Come, First Served (FCFS) algorithm, what is the average waiting time of all processes? - A. 6 - B. 7 - C. 8 - D. 9
40. Using the Shortest Job First (SJF) algorithm, what is the waiting time of process P4? - A. 5 - B. 1 - C. 8 - D. 3
41. Using the Shortest Job First (SJF) algorithm, what is the waiting time of process P1? - A. 0 - B. 1 - C. 2 - D. 3
42. Using the Shortest Job First (SJF) algorithm, what is the average waiting time of all processes? - A. 5.6 - B. 4.2 - C. 7.5 - D. 3.5
43. Using the Round Robin (RR) algorithm (time quantum = 2), what is the waiting time of process P3? - A. 9 - B. 10 - C. 11 - D. 12
44. Using the Round Robin (RR) algorithm (time quantum = 2), what is the waiting time of process P5? - A. 11 - B. 12 - C. 13 - D. 14
45. Using the Round Robin (RR) algorithm (time quantum = 2), what is the average waiting time of all processes? - A. 6.8 - B. 5.4 - C. 6.2 - D. 8.5
46. Using the Round Robin (RR) algorithm (time quantum = 2), what is the average turnaround time of all processes? - A. 9.5 - B. 8.6 - C. 11.2 - D. 10.6
47. In which of the following does a process, once the CPU has been allocated to it, keep the CPU until it releases it? - A. Non-preemptive Scheduling - B. Preemptive Scheduling - C. Priority Scheduling - D. Round Robin Scheduling
48. In a multi-threaded, multicore processor, how many different levels of scheduling are required? - A. One - B. Two - C. Three - D. Four
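The burst-time table for Questions 38 to 46 did not survive transcription, so those answers cannot be reproduced here, but all three algorithms are mechanical to simulate. A sketch (the demo burst times P1=10, P2=4, P3=7, P4=3, all arriving at time 0, are taken from one of the worked examples later in this paper, not from the lost table):

```python
from collections import deque

def fcfs_waiting(bursts):
    """Waiting time per process when served in list order (all arrive at 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)   # a process waits for everything before it
        clock += b
    return waits

def sjf_waiting(bursts):
    """Non-preemptive SJF: shortest next CPU burst first (all arrive at 0)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock
        clock += bursts[i]
    return waits

def rr_waiting(bursts, quantum):
    """Round Robin: waiting time = completion - burst (all arrive at 0)."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    ready, clock = deque(range(len(bursts))), 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run at most one quantum
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # back to the tail of the queue
        else:
            completion[i] = clock
    return [completion[i] - bursts[i] for i in range(len(bursts))]

bursts = [10, 4, 7, 3]                # P1..P4 from a worked example below
print(fcfs_waiting(bursts))           # [0, 10, 14, 21]
print(sjf_waiting(bursts))            # [14, 3, 7, 0] (SJF order: P4, P2, P3, P1)
```

Average waiting time is just sum(waits)/len(waits), and turnaround time is waiting time plus the burst when everything arrives at time 0.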
49. In which of the load balancing approaches does a specific task periodically check the load on each processor and, if it finds an imbalance, move threads from overloaded to idle or less-busy processors? - A. Pull migration - B. Soft affinity - C. Push migration - D. Hard affinity
50. Which of the following is one of the requirements that must be satisfied in the solution of the critical-section problem? - A. Unbounded Waiting - B. Starvation - C. Circular wait - D. Mutual exclusion

1. The CPU utilization can be improved by using direct memory access. - True
2. Peer-to-Peer systems do not distinguish clients and servers; all nodes are considered peers. - True
3. Create process and terminate process are system calls of the category "Information maintenance". - False
4. A user application requests a service from the operating system via a hardware interrupt. - False
5. Tertiary storage devices like magnetic tapes are so fast and large that they are used only for special purposes such as storing backups. - False
6. Interrupt transfers control to the interrupt service routine through the interrupt vector, which contains the addresses of all the service routines. - True
7. In the bounded-buffer problem, the producer must wait if the buffer is full and the consumer waits if the buffer is empty. - True
8. File system manipulation is one of the operating system services. - True
9. The main disadvantage of the monolithic operating system structure is that it is slow and inefficient. - False
10. When a process switches from the waiting state to the ready state after I/O completion, the CPU scheduler may need to make a scheduling decision. - True
11. When a process creates a child process, that child process must share all resources with the parent process. - False
12. In the "direct communication" between processes, the link may be unidirectional or bi-directional. - False
13. The wait-for graph cannot be used for deadlock detection in a system with multiple instances of each resource type. - True
14. In the Resource-Allocation Graph, if the graph contains no cycles then there is no deadlock. - True
15. In dynamic linking, system libraries and program code are combined by the loader into the binary program image. - False
16. Not all unsafe states are deadlocks, and a deadlocked state may not be in an unsafe state. - False
17. Peterson's solution could not satisfy the three requirements to solve the critical-section problem. - False
18. To define the logical address space of a process, the base and limit registers are used. The base register holds the smallest legal physical memory address, and the limit register specifies the size of the range. - True
19. When several processes access and manipulate shared data concurrently and the outcome of the execution depends on the particular order in which the access takes place, this is called a race condition. - True
20. The data section of a process memory layout contains temporary data when invoking functions. - False

---

Suppose that the following processes arrive at time 0 in the order P1, P2, P3, P4:

| Process | Burst Time |
|---------|------------|
| P1      | 10         |
| P2      | 4          |
| P3      | 7          |
| P4      | 3          |

Using the Shortest Job First (SJF) scheduling algorithm, what is the waiting time for process P2? - A. 10 - B. 3 - C. 14

Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

| Process | Burst Time |
|---------|------------|
| P1      | 8          |
| P2      | 10         |

All processes arrive at time 0 in the order P1, P2. Using the First Come, First Served (FCFS) algorithm, what is the average waiting time? - A. 3 - B. 5 - C. 4

Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

| Process | Burst Time |
|---------|------------|
| P1      | 10         |
| P2      | 5          |

All processes arrive at time 0 in the order P1, P2. Using the non-preemptive Shortest Job First (SJF) algorithm, what is the average turnaround time? - A. 10 - B. 15 - C. 20 - D. 5

Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

| Process | Burst Time |
|---------|------------|
| P1      | 8          |
| P2      | 10         |

All processes arrive at time 0 in the order P1, P2. Using the Round Robin (RR) algorithm and time quantum = 4, what is the waiting time of P2? - A. 3 - B. 5 - C. 4

Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

| Process | Burst Time |
|---------|------------|
| P1      | 10         |
| P2      | 5          |

All processes arrive at time 0 in the order P1, P2. Using the non-preemptive Shortest Job First (SJF) algorithm, what is the waiting time of P2? - A. 0 - B. 4 - C. 3

Regarding the scheduling algorithm optimization criteria, it is desirable to: - A. Minimize throughput - B. Maximize turnaround time - C. Maximize waiting time - D. Maximize CPU utilization

Each process is represented in the operating system by a process control block (PCB). The PCB does not contain the accounting information like the amount of CPU used, clock time elapsed since start, time limits, account numbers, job or process numbers, etc. - False

The greatest advantage of UEFI is that it is a single, complete boot manager and therefore is faster than the multistage BIOS boot process. - True

PCB stands for: - A. None of the other selections - B. Process Control Base - C. Provides Control Block - D. Process Control Block

Which of the following storage systems is non-volatile? - A. Hard-Disk - B. Cache - C. Main memory - D. Registers

Multiprogramming decreases the CPU utilization as well as keeping users satisfied. - False

Threads share the code and data memory sections of the process to which they belong. - True

CLI stands for: - A. None of the other selections - B. Command Line Interpreter - C. Comment Line Interconnect - D. Connection Line Interface

---

A logical address is the address seen by the memory unit. - False

Swapping makes physical memory accommodate fewer processes. - False

Paging is a memory management scheme that avoids external fragmentation. - True

If the base register holds 100040 and the limit register is 120900, which of the following is a legal address? - A. 220990 - B. 100030 - C. 10050 - D. 110940

In the page table, Valid-invalid bit = "valid" means that the associated page is not in the process's logical address space. - False

Consider a computer system with a 16-bit logical address and 2 KB page size. How many entries are there in the page table? - A. 2^5 - B. 2^4 - C. 2^6 - D. 2^3

Which of the following is not one of the deadlock conditions? - A. Processes wait - B. Hold and wait - C. No preemption - D. Mutual exclusion

---

In a system with a single CPU core, how many processes can run at a time? - A) One - B) Two - C) Three - D) Four

What is the objective of multiprogramming? - A) To minimize CPU utilization - B) To have some process running at all times, maximizing CPU utilization - C) To reduce the number of processes in memory - D) To increase the waiting time of processes

1. In a system with a single CPU core, only __________ process can run at a time. - one
2. The objective of multiprogramming is to have some process running at all times, to maximize __________.
- CPU utilization

1. The objective of multiprogramming is to minimize CPU utilization. (False)
2. In a single CPU core system, multiple processes can run simultaneously. (False)

1. Define Multiprogramming. - Multiprogramming is a technique that allows multiple processes to reside in memory simultaneously, with the CPU switching between them to maximize utilization.
2. What is the role of the CPU in a single-core system? - The CPU in a single-core system can only execute one process at a time, switching between processes to give the illusion of multitasking.

---

2. In a non-multiprogrammed system, what happens when a process waits for I/O? - A) The CPU switches to another process - B) The CPU sits idle - C) The process terminates - D) The CPU utilization increases

1. In a non-multiprogrammed system, the CPU sits __________ when a process waits for I/O. - idle
2. Multiprogramming helps in utilizing the CPU __________. - efficiently

1. Multiprogramming helps in utilizing the CPU efficiently. (True)
2. In a non-multiprogrammed system, the CPU is always busy. (False)

1. Define CPU Utilization. - CPU Utilization is the percentage of time the CPU is kept busy executing processes.
2. What is the role of I/O in CPU scheduling? - I/O operations cause processes to wait, and the CPU scheduler switches to another process to keep the CPU busy during this waiting time.

---

1. What happens in a multiprogramming system when one process has to wait? - A) The CPU sits idle - B) The operating system switches the CPU to another process - C) The process terminates - D) The CPU utilization decreases
2. In a multiprogramming system, several processes are kept in __________ at one time. - A) Disk - B) Memory - C) Cache - D) Register

1. In a multiprogramming system, several processes are kept in __________ at one time. - memory
2. When one process has to wait, the operating system takes the CPU away from that process and gives it to __________. - another process
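Two of the memory-management questions earlier (the base/limit legal-address question and the page-table-entries question) reduce to quick arithmetic; a sketch of both checks, using the numbers given in the questions:

```python
# Base/limit protection: a logical access is legal when
# base <= address < base + limit.
base, limit = 100040, 120900

def legal(addr):
    return base <= addr < base + limit

# Only 110940 falls inside [100040, 220940).
print([a for a in (220990, 100030, 10050, 110940) if legal(a)])  # [110940]

# Page-table size: 16-bit logical address, 2 KB = 2^11 byte pages,
# so 16 - 11 = 5 bits are left for the page number.
address_bits, offset_bits = 16, 11
entries = 2 ** (address_bits - offset_bits)
print(entries)   # 32, i.e. 2^5 page-table entries
```

The same split (page-number bits = address bits minus offset bits) works for any power-of-two page size.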
1. In a non-multiprogrammed system, the CPU is always busy. (False)
2. Multiprogramming increases CPU utilization. (True)

1. Define Ready Queue. - The Ready Queue is a queue that holds processes that are waiting to be allocated the CPU.
2. What is the role of the Operating System in CPU scheduling? - The Operating System manages the allocation of the CPU to processes, ensuring efficient utilization and fair scheduling.

---

1. What is the CPU-I/O Burst Cycle? - A) A cycle where the CPU is always busy - B) A cycle where processes alternate between CPU execution and I/O wait - C) A cycle where I/O operations are minimized - D) A cycle where CPU bursts are always long
2. What does the success of CPU scheduling depend on? - A) The number of processes in memory - B) The observed property of processes, specifically the CPU-I/O Burst Cycle - C) The size of the CPU cache - D) The speed of the I/O devices

1. Process execution begins with a __________ burst followed by an I/O burst. - CPU
2. The CPU-I/O Burst Cycle consists of alternating periods of __________ and __________. - CPU execution, I/O wait

1. The final CPU burst ends with a system request to terminate execution. (True)
2. The CPU-I/O Burst Cycle is not important for CPU scheduling. (False)

1. Define CPU Burst. - A CPU Burst is the period of time during which a process is executing instructions on the CPU.
2. What is an I/O Burst? - An I/O Burst is the period of time during which a process is waiting for an I/O operation to complete.

---

1. What is the typical distribution of CPU burst times? - A) A large number of long CPU bursts - B) A small number of short CPU bursts - C) A large number of short CPU bursts and a small number of long CPU bursts - D) An equal number of short and long CPU bursts
2. What type of program typically has many short CPU bursts? - A) CPU-bound programs - B) I/O-bound programs - C) Memory-bound programs

1. An I/O-bound program typically has many __________ CPU bursts. - short
2. A CPU-bound program might have a few __________ CPU bursts. - long

1. CPU-bound programs typically have many short CPU bursts. (False)
2. The distribution of CPU burst times is important for implementing CPU-scheduling algorithms. (True)

1. Define I/O-bound Program. - An I/O-bound program is a program that spends most of its time waiting for I/O operations to complete, resulting in many short CPU bursts.
2. What is a CPU-bound Program? - A CPU-bound program is a program that spends most of its time executing instructions on the CPU, resulting in fewer but longer CPU bursts.

---

1. What does the CPU scheduler do? - A) Manages memory allocation - B) Selects a process from the ready queue and allocates a CPU core to it - C) Handles I/O operations - D) Terminates processes
2. How can a ready queue be implemented? - A) As a stack - B) As a FIFO queue, a priority queue, a tree, or an unordered linked list - C) As a circular buffer - D) As a hash table

1. The ready queue can be implemented as a __________ queue. - FIFO
2. The records in the ready queue are generally __________ of the processes. - process control blocks (PCBs)

1. The ready queue is always implemented as a FIFO queue. (False)
2. The CPU scheduler selects processes from the ready queue based on their priority. (True)

1. Define Ready Queue. - The Ready Queue is a queue that holds processes that are waiting to be allocated the CPU.
2. What is a Process Control Block (PCB)? - A Process Control Block (PCB) is a data structure that contains information about a process, such as its state, program counter, and CPU registers.

---

1. When does CPU scheduling take place? - A) Only when a process terminates - B) When a process switches from running to waiting, running to ready, waiting to ready, or terminates - C) Only when a new process is created - D) Only when an interrupt occurs
2. In which situations does the CPU scheduler have no choice in scheduling?
- A) When a process switches from running to waiting or terminates - B) When a process switches from running to ready - C) When a process switches from waiting to ready - D) When a new process is created
- A) When a process switches from running to waiting or terminates

1. CPU scheduling decisions may take place when a process switches from __________ to __________ state. - running, waiting

1. CPU scheduling decisions are only made when a process terminates. (False)
2. In preemptive scheduling, the CPU can be taken away from a process even if it has not completed its execution. (True)

1. Define CPU Scheduling. - CPU Scheduling is the process of selecting which process will be allocated the CPU at any given time.
2. What is the role of the CPU Scheduler? - The CPU Scheduler selects processes from the ready queue and allocates the CPU to them based on a specific scheduling algorithm.

---

1. What is nonpreemptive scheduling? - A) The CPU can be taken away from a process at any time - B) The process keeps the CPU until it releases it by terminating or switching to the waiting state - C) The CPU is allocated based on priority - D) The CPU is shared equally among all processes
2. What is preemptive scheduling? - A) The CPU can be taken away from a process at any time - B) The process keeps the CPU until it releases it by terminating or switching to the waiting state - C) The CPU is allocated based on priority - D) The CPU is shared equally among all processes

1. In preemptive scheduling, the CPU can be taken away from a process even if it has not __________. - completed its execution
2. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by __________ or by switching to the waiting state. - terminating

1. Nonpreemptive scheduling is also known as cooperative scheduling. (True)
2. Preemptive scheduling is always better than nonpreemptive scheduling. (False)

1. Define Nonpreemptive Scheduling. - Nonpreemptive Scheduling is a scheduling scheme where the CPU is allocated to a process, and the process keeps the CPU until it releases it by terminating or switching to the waiting state.
2. What is Preemptive Scheduling? - Preemptive Scheduling is a scheduling scheme where the CPU can be taken away from a process even if it has not completed its execution.

---

1. What is a potential issue with preemptive scheduling? - A) Increased CPU utilization - B) Race conditions when data are shared among processes - C) Reduced throughput - D) Longer waiting times
2. Which operating systems use preemptive scheduling algorithms? - A) Windows, macOS, Linux, and UNIX - B) Only Windows - C) Only Linux - D) Only macOS

1. Most modern operating systems use __________ scheduling algorithms. - preemptive
2. Preemptive scheduling can result in __________ when data are shared among several processes. - race conditions

1. Preemptive scheduling is not used in modern operating systems. (False)
2. Race conditions occur when two processes try to access shared data simultaneously. (True)

1. Define Race Condition. - A Race Condition occurs when two or more processes access shared data concurrently, and the final outcome depends on the order of execution.
2. What is the role of the Operating System Kernel in scheduling? - The Operating System Kernel manages the scheduling of processes, ensuring that the CPU is allocated efficiently and fairly.

---

1. What is the role of the dispatcher? - A) To manage memory allocation - B) To give control of the CPU to the process selected by the CPU scheduler - C) To handle I/O operations - D) To terminate processes
2. What does the dispatcher involve during a context switch? - A) Switching context from one process to another - B) Allocating memory to a new process - C) Terminating a process - D) Handling I/O requests

1. The dispatcher involves __________ switching from one process to another. - context
2. The dispatcher should be as __________ as possible, since it is invoked during every context switch. - fast

1. The dispatcher is responsible for allocating memory to processes. (False)
2. The dispatcher is invoked during every context switch. (True)

1. Define Dispatcher. - The Dispatcher is a module that gives control of the CPU to the process selected by the CPU scheduler, involving context switching, switching to user mode, and jumping to the proper location in the user program.
2. What is Context Switching? - Context Switching is the process of saving the state of one process and loading the state of another process so that the CPU can switch between them.

---

1. What is the main focus of the "Scheduling Criteria" section? - A) To compare different CPU-scheduling algorithms - B) To discuss memory management techniques - C) To explain process synchronization - D) To describe deadlock prevention
2. Which of the following is NOT a scheduling criterion? - A) CPU utilization - B) Throughput - C) Memory allocation - D) Turnaround time

1. Scheduling criteria include CPU utilization, throughput, turnaround time, waiting time, and __________. - response time
2. The goal of scheduling algorithm optimization is to maximize __________ and throughput, and to minimize turnaround time, waiting time, and response time. - CPU utilization

1. CPU utilization is not an important criterion for comparing scheduling algorithms. (False)
2. Response time is the time it takes to output the response, not to start responding. (False)

1. Define Throughput. - Throughput is the number of processes that are completed per time unit.
2. What is Turnaround Time? - Turnaround Time is the amount of time to execute a particular process, from submission to completion.

---

1. What is CPU utilization? - A) The number of processes completed per time unit - B) The percentage of time the CPU is kept busy - C) The amount of time to execute a particular process - D) The amount of time a process waits in the ready queue
2. What is turnaround time? - A) The amount of time a process waits in the ready queue - B) The amount of time to execute a particular process, from submission to completion - C) The time it takes to output the response - D) The number of processes completed per time unit

1. __________ is the sum of the periods spent waiting in the ready queue, executing on the CPU, and doing I/O. - Turnaround time
2. __________ is the amount of time it takes from when a request was submitted until the first response is produced. - Response time

1. Response time is the time it takes to output the response, not to start responding. (False)
2. Turnaround time includes the time spent waiting in the ready queue, executing on the CPU, and doing I/O. (True)

1. Define Waiting Time. - Waiting Time is the amount of time a process has been waiting in the ready queue.
2. What is Response Time? - Response Time is the amount of time it takes from when a request was submitted until the first response is produced.

---

1. What is the goal of scheduling algorithm optimization? - A) To minimize CPU utilization - B) To maximize turnaround time - C) To maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time - D) To increase waiting time
2. Under what circumstances might we optimize the minimum or maximum values rather than the average? - A) To guarantee that all users get good service - B) To maximize CPU utilization - C) To minimize throughput - D) To increase response time

1. In some cases, we prefer to optimize the __________ or maximum values rather than the average. - minimum
2. Optimizing the maximum response time guarantees that all users get __________. - good service
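The distinction between turnaround, waiting, and response time above becomes concrete with one small calculation; a sketch (the process and its timings are hypothetical, chosen only to make the three quantities differ):

```python
# Hypothetical process: arrives at t=0, is first dispatched at t=4
# (so its first response begins then), uses 7 ms of CPU and 2 ms of
# I/O in total, and completes at t=13.
arrival, first_run, completion = 0, 4, 13
cpu_burst, io_time = 7, 2

turnaround = completion - arrival            # submission to completion
waiting = turnaround - cpu_burst - io_time   # time spent only in the ready queue
response = first_run - arrival               # time until the first response starts

print(turnaround, waiting, response)         # 13 4 4
```

Note that waiting time counts only ready-queue time, while turnaround time also includes the CPU and I/O periods, which is exactly the True statement above.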
1. Optimizing the maximum response time guarantees that all users get good service. (True)
2. The goal of scheduling algorithm optimization is to minimize CPU utilization. (False)

1. Define Optimization Criteria. - Optimization Criteria are the factors used to evaluate and compare different scheduling algorithms, such as CPU utilization, throughput, and turnaround time.
2. What is the purpose of minimizing waiting time in scheduling algorithms? - The purpose of minimizing waiting time is to ensure that processes spend as little time as possible waiting in the ready queue, improving overall system efficiency.

---

1. What is the main focus of the "Scheduling Algorithms" section? - A) To discuss memory management techniques - B) To describe different CPU scheduling algorithms - C) To explain process synchronization - D) To describe deadlock prevention
2. Which of the following is NOT a scheduling algorithm discussed in this section? - A) First-Come, First-Served (FCFS) - B) Shortest-Job-First (SJF) - C) Round Robin (RR) - D) Memory Management

1. The scheduling algorithms are described in the context of a __________ CPU core. - single
2. The FCFS scheduling algorithm is __________, meaning once the CPU is allocated to a process, it keeps the CPU until it releases it. - nonpreemptive

1. Modern CPU architectures always have a single processing core. (False)
2. The FCFS scheduling algorithm is particularly troublesome for interactive systems. (True)

1. Define Scheduling Algorithm. - A Scheduling Algorithm is a method used by the operating system to decide which process will be allocated the CPU at any given time.
2. What is the role of the FCFS Scheduling Algorithm? - The FCFS Scheduling Algorithm allocates the CPU to the process that requests it first, and the process keeps the CPU until it releases it by terminating or switching to the waiting state.

---

1. Which of the following is a disadvantage of the FCFS scheduling algorithm? - A) It is complex to implement - B) It results in a short average waiting time - C) It can lead to long waiting times for short processes - D) It is preemptive
2. What is the convoy effect in FCFS scheduling? - A) Short processes get stuck behind long processes - B) Long processes get stuck behind short processes - C) The CPU is always busy - D) The CPU is always idle

1. The FCFS scheduling algorithm is implemented using a __________ queue. - FIFO
2. The convoy effect results in lower __________ and device utilization. - CPU

1. The convoy effect is beneficial for interactive systems. (False)
2. The FCFS scheduling algorithm is optimal for minimizing waiting time. (False)

Define Convoy Effect. - The Convoy Effect occurs when short processes get stuck behind long processes in the ready queue, leading to increased waiting times for short processes.
Define Waiting Time. - Waiting Time is the amount of time a process has been waiting in the ready queue.
What is Turnaround Time? - Turnaround Time is the amount of time to execute a particular process, from submission to completion.
Define Average Waiting Time. - Average Waiting Time is the average amount of time that processes spend waiting in the ready queue.
What is the role of the FCFS Scheduling Algorithm in dynamic situations? - The FCFS Scheduling Algorithm can lead to the convoy effect, where short processes get stuck behind long processes, resulting in lower CPU and device utilization.

2. What happens to I/O-bound processes when a CPU-bound process holds the CPU in FCFS scheduling? - A) They execute quickly - B) They wait in the ready queue - C) They terminate - D) They switch to the waiting state

1. The __________ results in lower CPU and device utilization. - convoy effect
2. The __________ will get and hold the CPU during its execution. - CPU-bound process

1. The convoy effect improves CPU and device utilization. (False)
2. The convoy effect is beneficial for interactive systems. (False)
Define Convoy Effect. - The Convoy Effect occurs when short processes get stuck behind long processes in the ready queue, leading to increased waiting times for short processes. 2. What is the role of the CPU-bound process in the convoy effect? - The CPU-bound process holds the CPU for an extended period, causing other processes to wait in the ready queue, leading to lower CPU and device utilization. | P a g e 18 1. What is the SJF scheduling algorithm? - A) The process with the longest next CPU burst is scheduled first - B) The process with the shortest next CPU burst is scheduled first - C) The process that arrives first is scheduled first - D) The process with the highest priority is scheduled first 2. What is another name for the SJF scheduling algorithm? - A) Shortest-Remaining-Time-First - B) Shortest-Next-CPU-Burst - C) First-Come, First-Served - D) Round Robin 1. The SJF scheduling algorithm is also known as the __________ algorithm. - shortest-next-CPU-burst 2. The SJF scheduling algorithm schedules the process with the __________ next CPU burst first. - shortest 1. The SJF scheduling algorithm is nonpreemptive. (True) 2. The SJF scheduling algorithm is optimal for minimizing waiting time. (True) 1. Define SJF Scheduling. - SJF Scheduling is a CPU scheduling algorithm that selects the process with the shortest next CPU burst for execution. 1. What is a disadvantage of the SJF scheduling algorithm? - A) It is complex to implement - B) It results in a short average waiting time - C) It can lead to starvation for long-running processes - D) It is preemptive 2. What is a challenge in implementing the SJF scheduling algorithm? - A) It is difficult to know the length of the next CPU burst - B) It is difficult to implement a priority queue - C) It is difficult to handle I/O-bound processes - D) It is difficult to manage memory allocation 1. The SJF scheduling algorithm can lead to __________ for long-running processes. - starvation 2. 
The difficulty in implementing the SJF scheduling algorithm is knowing the length of the next __________. - CPU burst 1. The SJF scheduling algorithm is optimal for minimizing waiting time. (True) 2. The SJF scheduling algorithm is easy to implement. (False) 1. Define Starvation. - Starvation occurs when a process is perpetually denied access to the CPU because other processes are always prioritized over it. ----------------------------------------------------------------------------------------------------------------------------------------------- 1. What is the main challenge in implementing the SJF algorithm at the CPU scheduling level? - a) Lack of memory - b) Inability to know the length of the next CPU burst - c) High power consumption - d) Limited number of processes | P a g e 19 2. What is used to predict the length of the next CPU burst? - a) Linear regression - b) Exponential average - c) Random guessing - d) Fixed time quantum 1. What is the key difference between preemptive and non-preemptive SJF scheduling? - a) Preemptive SJF allows processes to run indefinitely - b) Preemptive SJF can interrupt a running process if a shorter burst arrives - c) Non-preemptive SJF always preempts the current process - d) Non-preemptive SJF is not optimal 2. What is another name for preemptive SJF scheduling? - a) First-Come, First-Served (FCFS) - b) Shortest-Remaining-Time-First (SRTF) - c) Round Robin (RR) - d) Priority Scheduling 1. Preemptive SJF scheduling is also known as Shortest-Remaining-Time-First scheduling. - True 2. Non-preemptive SJF scheduling always preempts the current process when a new process arrives. - False (Non-preemptive SJF allows the current process to finish its burst.) 1. Define preemptive SJF scheduling. - Answer: Preemptive SJF scheduling is a CPU scheduling algorithm where the currently running process can be interrupted if a new process arrives with a shorter CPU burst. 2. 
What is the main advantage of preemptive SJF over non-preemptive SJF? - Answer: Preemptive SJF can reduce waiting time by allowing shorter processes to execute immediately. --- 1. What is the waiting time of a process? - Answer: The waiting time of a process is the total time the process spends waiting in the ready queue before it gets executed. 2. What is the turnaround time of a process? - Answer: The turnaround time of a process is the total time taken from the arrival of the process to its completion. --- 1. In non-preemptive SJF, if two processes have the same CPU burst length, FCFS is used to break the tie. - True 2. The turnaround time for process P2 in the example is 10 milliseconds. - True --- 1. Define non-preemptive SJF scheduling. - Answer: Non-preemptive SJF scheduling is a CPU scheduling algorithm where the currently running process is allowed to finish its CPU burst before a new process is scheduled. 2. What is the main disadvantage of non-preemptive SJF scheduling? - Answer: It may lead to longer waiting times for shorter processes if a long process is already running. --- Definitions: 1. What is the key advantage of preemptive SJF scheduling? - Answer: It reduces waiting time for shorter processes by allowing them to execute immediately. | P a g e 20 2. What is the main challenge of preemptive SJF scheduling? - Answer: It requires frequent context switches, which can increase overhead. 1. What is the time quantum in Round Robin (RR) scheduling? - a) A fixed time interval for each process - b) A small unit of time allocated to each process - c) The total time a process can run - d) The time taken for a context switch 2. What happens if a process's CPU burst exceeds the time quantum in RR scheduling? - a) The process is terminated - b) The process is preempted and added to the end of the ready queue - c) The process is given another time quantum immediately - d) The process is moved to the waiting queue 1. 
In Round Robin scheduling, each process gets a small unit of CPU time called a __________. - time quantum 1. Round Robin scheduling is a preemptive algorithm. - True 2. In RR scheduling, a process can run for multiple time quanta in a row if it is the only runnable process. - True --- 1. What happens when the timer goes off in Round Robin scheduling? - a) The process is terminated - b) A context switch occurs, and the process is moved to the end of the ready queue - c) The process is given another time quantum immediately - d) The process is moved to the waiting queue 2. What is the role of the CPU scheduler in Round Robin scheduling? - a) To terminate processes - b) To pick the next process from the ready queue and set a timer - c) To allocate memory to processes - d) To manage I/O operations 1. In Round Robin scheduling, a process may voluntarily release the CPU if its burst is shorter than the time quantum. - True 2. The CPU scheduler in Round Robin scheduling does not use a timer. - False (A timer is used to interrupt the process after the time quantum.) 1. In Round Robin scheduling, the time quantum q should be much larger than the context switch time. - True 2. Round Robin scheduling is designed for real-time systems. - False (It is designed for timesharing systems.) 1. A smaller time quantum generally leads to a lower average turnaround time. - False (It leads to a higher average turnaround time due to more context switches.) 2. In the example on Page 16, the average turnaround time is better with a larger time quantum. - True 1. A time quantum that is too small can lead to poor response times. - False (It leads to high overhead due to frequent context switches.) | P a g e 21 2. A time quantum that is too large can lead to poor response times. - True 1. Why can't the SJF algorithm be implemented at the level of CPU scheduling? o a) It is too complex. o b) There is no way to know the length of the next CPU burst. o c) It requires too much memory.
o d) It is not efficient. 2. The next CPU burst is generally predicted as: o a) A linear average of the measured lengths of previous CPU bursts. o b) An exponential average of the measured lengths of previous CPU bursts. o c) A random value. o d) The same as the previous burst. Exponential Average: A method to predict the length of the next CPU burst based on the measured lengths of previous CPU bursts. 1. We may not know the length of the next CPU burst, but we may be able to ______ its value. o predict 2. By computing an approximation of the length of the next CPU burst, we can then pick the process with the shortest______ CPU burst. o predicted 1. The SJF algorithm can be: o a) Preemptive. o b) Non-preemptive. o c) Both a and b. o d) Neither a nor b. 2. A preemptive SJF algorithm will: o a) Allow the currently running process to finish its CPU burst. o b) Preempt the currently executing process. o c) Ignore the newly arrived process. o d) Terminate the currently executing process. Preemptive SJF: A scheduling algorithm where the currently executing process can be preempted if a new process with a shorter CPU burst arrives. 1. The choice between preemptive and non-preemptive SJF arises when a new process arrives at the ready queue while a previous process is still _______. o executing 2. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first _______. o scheduling Shortest-Remaining-Time-First (Preemptive SJF): A scheduling algorithm where the currently executing process can be preempted if a new process with a shorter CPU burst arrives. Non-preemptive SJF: A scheduling algorithm where the currently running process is allowed to finish its CPU burst before any new process is scheduled. Preemptive SJF: A scheduling algorithm where the currently executing process can be preempted if a new process with a shorter CPU burst arrives. | P a g e 22 1. The Round Robin (RR) scheduling algorithm is: a) Non-preemptive. b) Preemptive. c) Cooperative. 
d) Both a and c. 2. In RR scheduling, each process gets a small unit of CPU time called: a) Time quantum. b) Time slice. c) Time interval. d) Both a and b. Round Robin (RR) Scheduling: A CPU scheduling algorithm similar to FCFS scheduling, but with preemption added to enable the system to switch between processes. 1. In RR scheduling, no process is allocated the CPU for more than 1 time quantum in a row unless it is the only _______ process. runnable 2. If a process’s CPU burst exceeds 1 time quantum, that process is preempted and is put back in the _______ queue. ready 1. In RR scheduling, if the CPU burst of the currently running process is longer than 1 time quantum, what happens? a) The process continues to execute. b) The timer will go off and cause an interrupt to the operating system. c) The process is terminated. d) The process is moved to the I/O queue. 2. The CPU scheduler in RR scheduling will: a) Select the process with the shortest CPU burst. b) Select the next process in the ready queue. c) Select the process with the highest priority. d) Select the process with the longest CPU burst. Context Switch: The process of saving the state of a currently running process and loading the state of the next process to be executed. 1. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and _______ the process. dispatches 2. If the CPU burst of the currently running process is longer than 1 time quantum, a context switch will be executed, and the process will be put at the _______ of the ready queue. tail 1. In RR scheduling, if there are n processes in the ready queue and the time quantum is q, each process gets: a) 1/2 of the CPU time. b) 1/3 of the CPU time. c) 1/n of the CPU time. d) n times the CPU time. | P a g e 23 2. The time quantum q should be: a) Smaller than the context switch time. b) Large compared to the context switch time. c) Equal to the context switch time.
d) Irrelevant to the context switch time. Time Quantum: A small unit of CPU time defined in the Round Robin scheduling algorithm, usually ranging from 10 to 100 milliseconds. 1. No process waits more than (n − 1) × q time units until its next time _______. quantum 2. The RR scheduling algorithm is designed especially for _______ systems. timesharing Turnaround Time: The total time taken from the submission of a process to the completion of the process. Gantt Chart: A visual representation of the schedule of processes, showing the start and finish times of each process. 1. The performance of the RR algorithm depends heavily on the size of the: a) Context switch time. b) Time quantum. c) CPU burst. d) I/O burst. 2. If the time quantum is extremely large, the RR policy is the same as: a) FCFS policy. b) SJF policy. c) Priority scheduling. d) Multilevel queue scheduling. Context Switch: The process of saving the state of a currently running process and loading the state of the next process to be executed. 1. If the time quantum is extremely small (say, 1 millisecond), the RR approach can result in a large number of _______ switches. context 2. The time required for a context switch is less than 10 _______. microseconds 1. If we have only one process of 10 time units and q = 12, the process finishes in: a) 2 time quanta. b) Less than 1 time quantum. c) 3 time quanta. d) 4 time quanta. 2. If q = 1, then how many context switches will occur for a process of 10 time units? a) 1 context switch. b) 2 context switches. c) 5 context switches. d) 9 context switches. | P a g e 24 Time Quantum: A small unit of CPU time defined in the Round Robin scheduling algorithm, usually ranging from 10 to 100 milliseconds. 1. If q = 6, the process requires 2 quanta, resulting in a context _______. switch 2. We want the time quantum to be large with respect to the context _______ time. switch 1. Turnaround time depends on: a) The number of processes. b) The size of the context switch. c) The size of the time quantum. d) The priority of the processes. 2. If the time quantum is 10 and there are three processes of 10 time units each, the average turnaround time is: a) 30. b) 20. c) 10. d) 40. Turnaround Time: The total time taken from the submission of a process to the completion of the process. 1. The average turnaround time can be improved if most processes finish their next CPU burst in a single time _______. quantum 2. If context-switch time is added in, the average turnaround time increases even more for a smaller time quantum, since more context _______ are required. switches 1. A rule of thumb for the time quantum is that: a) 50% of CPU bursts should be shorter than the time quantum. b) 60% of CPU bursts should be shorter than the time quantum. c) 70% of CPU bursts should be shorter than the time quantum. d) 80% of CPU bursts should be shorter than the time quantum. 2. If the time quantum is too small, the average turnaround time: a) Decreases. b) Increases. c) Remains the same. d) Becomes zero. Time Quantum: A small unit of CPU time defined in the Round Robin scheduling algorithm, usually ranging from 10 to 100 milliseconds. 1. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time _______. quantum 2. If the time quantum is too small, the average turnaround time increases because more context _______ are required. switches Round Robin (RR) Scheduling: A CPU scheduling algorithm where each process gets a small unit of CPU time (time quantum) and is preempted if it exceeds this time, then placed at the end of the ready queue. | P a g e 25 1. In priority scheduling, the CPU is allocated to the process with: a) The lowest priority number. b) The highest priority (smallest integer). c) The longest burst time. d) The shortest burst time. 2. Priority scheduling can be: a) Preemptive. b) Non-preemptive. c) Both a and b. d) Neither a nor b.
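The priority rule just described (smallest priority number runs first, with FCFS breaking ties among equal priorities) can be sketched in a few lines of Python. The process table below is illustrative, with all processes assumed to arrive at time 0:

```python
# Non-preemptive priority scheduling sketch (illustrative workload).
# Smallest priority number = highest priority; Python's stable sort
# preserves FCFS (arrival) order among equal-priority processes.

def priority_schedule(processes):
    """processes: list of (name, burst, priority) in arrival order.
    Returns a dict mapping each process name to its waiting time."""
    order = sorted(processes, key=lambda p: p[2])  # highest priority first
    waits, clock = {}, 0
    for name, burst, _priority in order:
        waits[name] = clock   # time already spent in the ready queue
        clock += burst        # process keeps the CPU for its whole burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]
waits = priority_schedule(procs)          # runs P2, then P5, P1, P3, P4
average_wait = sum(waits.values()) / len(waits)
```

For this workload the schedule is P2, P5, P1, P3, P4 and the average waiting time is 8.2. A preemptive variant would additionally compare each new arrival's priority against the running process and preempt it when the newcomer's priority is higher.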
Priority Scheduling: A CPU scheduling algorithm where each process is assigned a priority number, and the CPU is allocated to the process with the highest priority. 1. A preemptive priority scheduling algorithm will preempt the currently running process if the priority of the newly arrived process is _______ than the priority of the currently running process. higher 2. Equal-priority processes are scheduled in _______ order. FCFS Non-preemptive Priority Scheduling: A scheduling algorithm where the CPU is allocated to the process with the highest priority, and the currently running process is not preempted. Preemptive Priority Scheduling: A scheduling algorithm where the CPU is allocated to the process with the highest priority, and the currently running process can be preempted if a new process with a higher priority arrives. 1. A major problem with priority scheduling algorithms is: a) High CPU utilization. b) Indefinite blocking or starvation. c) Low throughput. d) High turnaround time. 2. A solution to the problem of indefinite blockage of low-priority processes is: a) Aging. b) Preemption. c) FCFS scheduling. d) Round Robin scheduling. Starvation: A situation where low-priority processes may never execute because higher-priority processes keep arriving. 1. Aging involves gradually increasing the priority of processes that wait in the system for a long _______. time 2. Another option is to combine RR and priority scheduling in such a way that the system executes the highest-priority process and runs processes with the same priority using _______ scheduling. RR Priority Scheduling with Round-Robin: A scheduling algorithm where processes with the highest priority are run first, and processes with the same priority are executed in a round-robin fashion. | P a g e 26 1. In multilevel queue scheduling, processes are: a) Placed in a single queue. b) Placed in separate queues based on priority. c) Scheduled randomly. d) Scheduled based on arrival time. 2.
Multilevel queue scheduling works well when combined with: a) FCFS scheduling. b) Round-robin scheduling. c) SJF scheduling. d) Priority scheduling. Multilevel Queue Scheduling: A scheduling algorithm where processes are divided into several separate queues based on priority or process type, and each queue has its own scheduling algorithm. 1. With priority scheduling, it is easier to have separate queues for each distinct _______. priority 2. If there are multiple processes in the highest-priority queue, they are executed in _______ order. round-robin 1. In multilevel queue scheduling, foreground processes are typically: a) Interactive processes. b) Batch processes. c) System processes. d) Real-time processes. 2. Foreground processes may have priority over: a) Background processes. b) System processes. c) Real-time processes. d) Interactive processes. Foreground Processes: Interactive processes that require quick response times and are given higher priority in scheduling. 1. Separate queues might be used for foreground and background processes, and each queue might have its own scheduling _______. algorithm 2. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an _______ algorithm. FCFS 1. In multilevel queue scheduling, each queue has: a) Equal priority. b) Absolute priority over lower-priority queues. c) No priority. d) Random priority. 2. If an interactive process enters the ready queue while a batch process is running, the batch process will be: a) Continued. b) Preempted. c) Terminated. d) Ignored. | P a g e 27 Batch Processes: Processes that are typically run in the background and have lower priority compared to interactive or real-time processes. No process in the batch queue could run unless the queues for real-time processes, system processes, and interactive processes were all _______. 
Empty If an interactive process entered the ready queue while a batch process was running, the batch process would be _______. preempted 1. Multilevel feedback queue scheduling allows a process to: a) Stay in the same queue. b) Move between various queues. c) Be terminated. d) Be ignored. 2. Aging can be implemented using: a) Multilevel feedback queue. b) FCFS scheduling. c) SJF scheduling. d) Priority scheduling. Multilevel Feedback Queue Scheduling: A scheduling algorithm that allows processes to move between various queues based on their characteristics and behavior. 1. Multilevel-feedback-queue scheduler is defined by the number of queues, scheduling algorithms for each queue, and methods used to determine when to upgrade or _______ a process. demote 2. Aging involves gradually increasing the priority of processes that wait in the system for a long _______. time 1. To prevent starvation, a process that waits too long in a lower-priority queue may gradually be moved to a higher-priority _______. queue 1. When multiple CPUs are available, what becomes possible? a) Load sharing, where multiple threads may run in parallel. b) Single-threaded execution. c) Reduced system complexity. d) Increased memory usage. 2. Multiprocessor systems can include which of the following architectures? a) Multicore CPUs. b) Multithreaded cores. c) NUMA systems. d) All of the above. Multiple-Processor Scheduling: The process of scheduling tasks in a system with multiple CPUs, allowing for load sharing and parallel execution of threads. 1. CPU scheduling is more complex when multiple _______ are available. CPUs 2. Multiprocessor systems may include multicore CPUs, multithreaded cores, NUMA systems, and _______ multiprocessing. heterogeneous | P a g e 28 1. In asymmetric multiprocessing, all scheduling decisions are handled by: a) A single processor (master server). b) Multiple processors. c) The operating system. d) The user. 2. 
The standard approach for supporting multiprocessors is: a) Asymmetric multiprocessing. b) Symmetric multiprocessing (SMP). c) Single-threaded processing. d) None of the above. Asymmetric Multiprocessing: A multiprocessor system where all scheduling decisions, I/O processing, and other system activities are handled by a single processor, while other processors execute only user code. 1. The advantage of asymmetric multiprocessing is that it is simple because only one core accesses the system data structures, reducing the need for data _______. sharing 2. The disadvantage of asymmetric multiprocessing is that the master server becomes a potential _______ where overall system performance may be reduced. bottleneck 1. In symmetric multiprocessing (SMP), each processor: a) Is self-scheduling. b) Depends on a master server for scheduling. c) Executes only user code. d) Is idle. 2. In SMP, threads can be organized in which of the following ways? a) All threads in a common ready queue. b) Each processor with its own private queue of threads. c) Both a and b. d) None of the above. Symmetric Multiprocessing (SMP): A multiprocessor system where each processor is self-scheduling and can independently select threads to run from a common ready queue or its own private queue. 1. In SMP, each processor may have its own private queue of _______. threads 2. SMP is the most common approach on systems supporting _______. multiprocessors 1. Recent trends in processor design include placing multiple processor cores on: a) The same physical chip. b) Separate physical chips. c) Different systems. d) None of the above. 2. Multithreaded processing cores help to remedy the situation of: a) Memory stalls. b) CPU overload. c) I/O bottlenecks. d) Network latency. | P a g e 29 Memory Stall: A situation where a processor spends a significant amount of time waiting for data to become available from memory. 1. 
Multithreaded processing cores allow the core to switch to another thread if one hardware thread _______ while waiting for memory. stalls 2. Placing multiple processor cores on the same physical chip is faster and consumes less _______ than systems in which each CPU has its own physical chip. power 1. Multithreaded multicore systems take advantage of memory stalls by: a) Making progress on another thread while memory retrieval happens. b) Halting all processes. c) Increasing the memory size. d) Reducing the number of threads. 2. From an operating system perspective, each hardware thread appears as: a) A physical CPU. b) A logical CPU. c) A memory unit. d) An I/O device. Multithreaded Multicore System: A system where each core has multiple hardware threads, allowing the core to switch between threads to make progress while waiting for memory retrieval. If one thread has a memory stall, the core can switch to another _______. thread From an operating system perspective, each hardware thread appears as a logical CPU that is available to run a _______ thread. software 1. Chip-multithreading (CMT) assigns each core: a) A single hardware thread. b) Multiple hardware threads. c) No hardware threads. d) A single software thread. 2. On a quad-core system with 2 hardware threads per core, the operating system sees: a) 4 logical processors. b) 6 logical processors. c) 8 logical processors. d) 10 logical processors. Chip-Multithreading (CMT): A technology that assigns each core multiple hardware threads, allowing for better utilization of the core's resources. Intel processors, such as the i7, support two threads per _______. core On a quad-core system with 2 hardware threads per core, the operating system sees _____ logical processors. 8 | P a g e 30 1. A multithreaded, multicore processor requires how many levels of scheduling? a) One. b) Two. c) Three. d) Four. 2. The OS may choose any scheduling algorithm to decide which software thread to run on each: a) Physical core. 
b) Hardware thread (logical CPU). c) Memory unit. d) I/O device. Multithreaded Multicore Processor: A processor that requires two levels of scheduling: one for the OS to decide which software thread to run on each hardware thread, and another for each core to decide which hardware thread to run. 1. The OS deciding which software thread to run on each hardware thread (logical CPU) is one level of _______. scheduling 2. Each core may use a simple round-robin algorithm to schedule a hardware thread to the _______ core. processing 1. Load balancing attempts to keep the workload: a) Evenly distributed across all processors. b) Concentrated on a single processor. c) Ignored. d) Randomly assigned. 2. Load balancing is necessary only on systems where: a) Each processor has its own private ready queue. b) There is a common run queue. c) There is no scheduling. d) All processors are idle. Load Balancing: The process of keeping the workload evenly distributed across all processors in an SMP system to fully utilize the benefits of having multiple processors. 1. Load balancing is unnecessary on systems with a common _______ queue. run 2. Push migration occurs when a specific task periodically checks the load on each processor and moves threads from overloaded to _______ processors. idle 1. Processor affinity refers to: a) A thread having affinity for a processor due to the cache contents. b) Load balancing. c) Scheduling algorithms. d) Memory allocation. 2. Soft affinity allows the operating system to: a) Attempt to keep a thread running on the same processor. b) Guarantee a thread runs on the same processor. c) Ignore processor affinity. d) Terminate the thread. | P a g e 31 Processor Affinity: The tendency of a thread to run on the same processor to take advantage of the cache contents stored by that processor. Load balancing may affect processor affinity as a thread may be moved from one processor to another to balance _______. 
loads Hard affinity allows a process to specify a set of processors it may _______ on. run 1. The main-memory architecture of a system can affect: a) Processor affinity issues. b) Scheduling algorithms. c) Load balancing. d) I/O operations. 2. If the OS is NUMA-aware, it will assign memory: a) Randomly. b) Close to the CPU the thread is running on. c) Far from the CPU. d) Based on the thread's priority. Non-Uniform Memory Access (NUMA): A memory architecture where the memory access time depends on the memory location relative to the processor. 1. If the OS is NUMA-aware, it will assign memory close to the CPU the thread is running on, thus providing the thread the fastest possible _______ access. memory 2. The main-memory architecture of a system can affect processor _______ issues. affinity 1. To select a CPU-scheduling algorithm for an OS, we must first: a) Implement the algorithm. b) Determine criteria and evaluate algorithms. c) Randomly choose an algorithm. d) Ignore the criteria. 2. Which of the following is a method for evaluating CPU-scheduling algorithms? a) Deterministic modeling. b) Queueing models. c) Simulations. d) All of the above. Algorithm Evaluation: The process of selecting and evaluating CPU-scheduling algorithms based on defined criteria such as CPU utilization, response time, and throughput. Criteria for selecting a CPU-scheduling algorithm may include maximizing CPU utilization under the constraint that the maximum response time is _______ milliseconds. 300 Once the selection criteria have been defined, we want to evaluate the algorithms under consideration using methods such as deterministic modeling, queueing models, simulations, and _______. Implementation 1. Deterministic modeling is a type of: a) Analytic evaluation. b) Simulation. c) Implementation. d) Random evaluation. | P a g e 32 2. Deterministic modeling requires: a) Random inputs. b) Exact numbers for input. c) Approximate values. d) No inputs. 
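Deterministic modeling, as described above, takes a predetermined workload and computes exact performance numbers for each algorithm. A minimal sketch of that evaluation follows; the burst times are illustrative, and all processes are assumed to arrive at time 0:

```python
# Deterministic modeling sketch: evaluate FCFS and non-preemptive SJF
# on one fixed workload. Burst times are illustrative; all processes
# are assumed to arrive at time 0.

def avg_waiting_time(bursts):
    """Each process waits for the total burst time of those run before it."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)
        clock += burst
    return sum(waits) / len(waits)

bursts = [10, 29, 3, 7, 12]
fcfs_avg = avg_waiting_time(bursts)          # FCFS: run in arrival order
sjf_avg = avg_waiting_time(sorted(bursts))   # SJF: shortest burst first
```

Here FCFS gives an average waiting time of 28.0 while SJF gives 13.0, consistent with the observation above that SJF's average waiting time can be less than half of FCFS's. These exact numbers apply only to this particular input, which is precisely the limitation of deterministic modeling noted in the text.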
Deterministic Modeling: A type of analytic evaluation that takes a predetermined workload and defines the performance of each algorithm for that workload.

1. Deterministic modeling is simple and fast, and it gives us exact numbers, allowing us to compare the _______. algorithms
2. The disadvantage of deterministic modeling is that it requires exact numbers for input, and its answers apply only to those _______. inputs

1. For the given workload, which scheduling algorithm gives the minimum average waiting time? a) FCFS. b) Non-preemptive SJF. c) Round Robin. d) Priority scheduling.
2. The average waiting time obtained with the SJF policy is: a) Less than half that obtained with FCFS scheduling. b) More than that obtained with FCFS scheduling. c) Equal to that obtained with FCFS scheduling. d) None of the above.

Average Waiting Time: The average time that processes spend waiting in the ready queue before getting executed by the CPU.

1. The average waiting time obtained with the SJF policy is less than half that obtained with _______ scheduling. FCFS

1. Queueing models describe the arrival-time of processes and CPU and I/O bursts: a) Probabilistically. b) Deterministically. c) Randomly. d) Statistically.
2. Queueing-network analysis involves: a) Describing a computer system as a network of servers, each with a queue of waiting processes. b) Ignoring the arrival rates. c) Randomly assigning processes to servers. d) None of the above.

Queueing Models: Models that describe the arrival-time of processes and CPU and I/O bursts probabilistically, often using exponential distributions.

Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, etc., using _______ theory. queueing
Queueing models are often only approximations of real systems, and the accuracy of the computed results may be _______. questionable

1. Little’s formula is used to compute: a) Average queue length, average waiting time, and average arrival rate. b) CPU utilization. c) Turnaround time. d) Response time.
2. Little’s formula is valid for: a) FCFS scheduling only. b) SJF scheduling only. c) Any scheduling algorithm and arrival distribution. d) Round Robin scheduling only.

Little’s Formula: A formula used to compute the average queue length, average waiting time, and average arrival rate in a queueing system.

1. To get a more accurate evaluation of scheduling algorithms, we can use: a) Simulations. b) Deterministic modeling. c) Queueing models. d) Random evaluation.
2. Simulations involve: a) No data structures. b) Programmed models of computer systems. c) Random inputs. d) No variables.

Simulations: A method to evaluate scheduling algorithms by simulating a computer system with programmed models and data structures representing system components.

1. Running simulations involves gathering statistics indicating algorithm _______. performance
2. Trace files provide an excellent way to compare two algorithms on exactly the same set of real _______. inputs

1. An advantage of running simulations is that they are: a) More accurate than queueing models. b) Less accurate than queueing models. c) Easier to implement. d) Less expensive.
2. A disadvantage of running simulations is that they: a) Require no storage. b) Can be expensive and require large amounts of storage. c) Are quick to run. d) Are easy to implement.

Trace Files: Files that record sequences of real events in real systems, used to drive simulations and compare scheduling algorithms.

Simulations can be expensive, often requiring many hours of computer _______. time
The design, coding, and debugging of the simulator can be a major _______. task

1. The only completely accurate way to evaluate a scheduling algorithm is to: a) Use deterministic modeling. b) Use queueing models. c) Implement it and test in real systems. d) Use simulations.
2. A disadvantage of implementing a new scheduler is that it: a) Is expensive and high risk. b) Is quick and easy. c) Requires no testing. d) Is always successful.

Implementation: The process of coding a scheduling algorithm, putting it in the operating system, and testing it in real systems to evaluate its performance.

1. The best way to compare algorithms is to implement them on real _______. machines
2. The expense of implementation is incurred in coding the algorithm and modifying the OS to support it, as well as in _______ the changes. testing

1. A cooperating process is one that: a) Can affect or be affected by other processes executing in the system. b) Runs independently of other processes. c) Does not share data with other processes. d) Executes on a single processing core.
2. Cooperating processes can share data through: a) Shared memory or message passing. b) Independent memory spaces. c) Separate I/O devices. d) Different operating systems.

Cooperating Process: A process that can affect or be affected by other processes executing in the system, and can share data through shared memory or message passing.

1. Processes can execute concurrently or in parallel, and the CPU scheduler switches rapidly between processes to provide _______ execution. concurrent
2. Maintaining data consistency requires mechanisms to ensure the orderly execution of _______ processes. cooperating

1. The Bounded Buffer problem allows at most how many items in the buffer at the same time? a) BUFFER_SIZE. b) BUFFER_SIZE - 1. c) BUFFER_SIZE + 1. d) BUFFER_SIZE / 2.
2. The Producer-Consumer problem focuses on: a) Independent processes. b) Process synchronization and coordination among cooperating processes. c) Memory allocation. d) I/O operations.

Bounded Buffer Problem: A classic synchronization problem where a buffer of fixed size is shared between a producer and a consumer, allowing at most BUFFER_SIZE - 1 items in the buffer at the same time.

1. The Producer-Consumer problem can be solved by using an integer counter that keeps track of the number of _______ buffer items. full
2. The counter is incremented by the producer after it produces a new buffer item and decremented by the consumer after it _______ a buffer item. consumes

1. The Producer-Consumer problem can lead to: a) Race conditions. b) Deadlocks. c) Memory leaks. d) I/O bottlenecks.
2. To guard against race conditions, we need to ensure that: a) Multiple processes can manipulate the variable counter simultaneously. b) Only one process at a time can manipulate the variable counter. c) The counter is never updated. d) The buffer size is unlimited.

Race Condition: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place.

1. The producer and consumer routines may not function correctly when executed _______. concurrently
2. To guard against the race condition, we need to ensure that only one process at a time can be manipulating the variable _______. counter

1. The code for the producer process includes a loop that waits while the counter is equal to: a) 0. b) BUFFER_SIZE. c) -1. d) BUFFER_SIZE / 2.
2. The code for the consumer process includes a loop that waits while the counter is equal to: a) 0. b) BUFFER_SIZE. c) -1. d) BUFFER_SIZE / 2.

Producer-Consumer Problem: A classic synchronization problem where a producer generates data and places it in a buffer, and a consumer takes the data from the buffer, requiring synchronization to avoid race conditions.

1. The producer process increments the counter after it produces a new buffer item, and the consumer process decrements the counter after it _______ a buffer item. consumes
2. When several processes access and manipulate the same data concurrently and the outcome depends on the order of access, it is called a _______ condition. race

1. The counter increment (counter++) could be implemented as: a) register1 = counter; register1 = register1 - 1; counter = register1. b) register1 = counter; register1 = register1 + 1; counter = register1. c) register2 = counter; register2 = register2 + 1; counter = register2. d) register2 = counter; register2 = register2 - 1; counter = register2.
2. The counter decrement (counter--) could be implemented as: a) register1 = counter; register1 = register1 + 1; counter = register1. b) register2 = counter; register2 = register2 - 1; counter = register2. c) register1 = counter; register1 = register1 - 1; counter = register1. d) register2 = counter; register2 = register2 + 1; counter = register2.

Race Condition: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place.

1. The CPU may be taken from the process before the execution of all three statements, leading to a _______ condition. race
2. The value of the variable counter may be 4, 5, or 6, but the only correct result is counter == _______. 5

1. In the kernel, processes P0 and P1 creating child processes using the fork() system call can lead to a race condition on which variable? a) process_id. b) next_available_pid. c) parent_pid. d) child_pid.
2. To prevent P0 and P1 from accessing the variable next_available_pid simultaneously, what mechanism is required? a) Deadlock. b) Mutual exclusion. c) Starvation. d) Context switching.

Race Condition: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the access takes place.

1. Unless mutual exclusion is provided to prevent P0 and P1 from accessing the variable next_available_pid, it is possible the same pid could be assigned to two different _______. processes
2. The fork() system call returns the process identifier of the newly created process to the _______ process. parent

1. The critical section problem involves designing a protocol that processes can use to: a) Cooperate and share data. b) Execute independently. c) Avoid memory allocation. d) Perform I/O operations.
2. When one process is executing in its critical section, no other process is allowed to: a) Execute in its critical section. b) Access memory. c) Perform I/O operations.

Critical Section: A segment of code in which a process may be accessing and updating shared data, and no other process is allowed to execute in its critical section simultaneously.

1. Each process has a segment of code called the critical section, in which the process may be accessing and updating _______ data. shared
2. The critical section problem is to design a protocol that the processes can use to synchronize their activity and cooperatively share _______. data

1. Each process must ask permission to enter its critical section in the: a) Entry section code. b) Exit section code. c) Remainder section code. d) Initialization code.
2. The critical section may be followed by an: a) Exit section. b) Entry section. c) Initialization section. d) Termination section.

Entry Section: The part of the code where a process requests permission to enter its critical section.

1. The general structure of process Pi ensures that only one process can be in a _______ section. critical
2. The critical section may be followed by an exit section, then a remainder section for the remaining _______. code

1. Which of the following is a requirement for a solution to the critical-section problem? a) Mutual Exclusion. b) Deadlock. c) Starvation. d) Context switching.
2. Bounded Waiting ensures that: a) A limit exists on the number of times other processes can enter their critical sections after a process has made a request to enter its critical section. b) Processes can enter their critical sections indefinitely. c) No process can enter its critical section. d) Processes can only enter their critical sections once.

Mutual Exclusion: A requirement for a solution to the critical-section problem, ensuring that if one process is executing in its critical section, no other processes can be executing in their critical sections.

1. Progress ensures that if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, the selection of the process that will enter the critical section next cannot be postponed _______. indefinitely
2. Bounded Waiting ensures that a bound or limit exists on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is _______. granted

1. In a single-core environment, the critical-section problem could be solved by: a) Preventing interrupts from occurring while a shared variable is being modified. b) Allowing multiple processes to access the shared variable simultaneously. c) Ignoring the shared variable. d) Using multiple CPUs.
2. Disabling interrupts in the entry section and enabling interrupts in the exit section is a solution that: a) Is not feasible in a multiprocessor environment. b) Works well in a multiprocessor environment. c) Prevents deadlocks. d) Ensures starvation.

Interrupt-based Solution: A solution to the critical-section problem in a single-core environment by preventing interrupts from occurring while a shared variable is being modified.

1. The critical-section problem could be solved simply in a single-core environment if we could prevent _______ from occurring while a shared variable was being modified. interrupts
2. Disabling interrupts in the entry section and enabling interrupts in the exit section is not a feasible solution in a _______ environment. multiprocessor

1. Peterson’s solution is a classic software-based solution to the: a) Deadlock problem. b) Critical-section problem. c) Memory allocation problem. d) I/O scheduling problem.
2. Peterson’s solution is restricted to how many processes? a) One. b) Three. c) Two. d) Four.

Peterson’s Solution: A classic software-based solution to the critical-section problem that does not require any special hardware and is restricted to two processes.

1. The variable turn indicates whose turn it is to enter the critical _______. section
2. The flag array is used to indicate if a process is ready to enter its critical _______. section

1. In Peterson’s solution, the variable turn is used to: a) Indicate whose turn it is to enter the critical section. b) Indicate the process priority. c) Indicate the process state. d) Indicate the process ID.
2. The flag array in Peterson’s solution is used to: a) Indicate the process priority. b) Indicate if a process is ready to enter its critical section. c) Indicate the process state. d) Indicate the process ID.

Flag Array: An array used in Peterson’s solution to indicate if a process is ready to enter its critical section.

1. In Peterson’s solution, process Pi enters its critical section only if: a) flag[j] = false or turn = i. b) flag[i] = false or turn = j. c) flag[j] = true or turn = j. d) flag[i] = true or turn = i.
2. The structure of process Pi in Peterson’s solution ensures that: a) Multiple processes can be in their critical sections simultaneously. b) Only one process can be in its critical section at a time. c) Processes can enter their critical sections without any conditions. d) Processes can never enter their critical sections.

Critical Section: A segment of code in which a process may be accessing and updating shared data, and no other process is allowed to execute in its critical section simultaneously.

The structure of process Pi in Peterson’s solution ensures that only one process can be in its critical _______ at a time. section
The eventual value of turn determines which of the two processes is allowed to enter its critical section _______. first

1. Peterson’s solution preserves mutual exclusion by ensuring that: a) Each Pi enters its critical section only if flag[j] = false or turn = i. b) Each Pi enters its critical section only if flag[i] = true or turn = j. c) Each Pi enters its critical section only if flag[j] = true or turn = j. d) Each Pi enters its critical section only if flag[i] = false or turn = i.
2. In Peterson’s solution, if both processes are ready to execute their critical sections at the same time, then: a) Either turn = i or turn = j. b) Both turn = i and turn = j. c) Neither turn = i nor turn = j. d) turn = i + j.

Mutual Exclusion: A requirement for a solution to the critical-section problem, ensuring that if one process is executing in its critical section, no other processes can be executing in their critical sections.

1. The progress requirement in Peterson’s solution is satisfied because: a) If Pj is not ready to enter its critical section, then flag[j] = false, and Pi can enter its critical section. b) If Pi is not ready to enter its critical section, then flag[i] = true, and Pj can enter its critical section. c) If both processes are ready to enter their critical sections, neither can enter. d) If both processes are not ready to enter their critical sections, both can enter.
2. In Peterson’s solution, if turn = i, then: a) Pi will enter its critical section. b) Pj will enter its critical section. c) Both Pi and Pj will enter their critical sections. d) Neither Pi nor Pj will enter their critical sections.

Progress Requirement: A requirement for a solution to the critical-section problem, ensuring that if no process is executing in its critical section and there exist some processes that wish to enter their critical sections, the selection of the process that will enter the critical section next cannot be postponed indefinitely.

1. The progress requirement is satisfied in Peterson’s solution because if Pj is not ready to enter its critical section, then flag[j] = false, and Pi can enter its critical _______. section
2. If both processes are ready to execute their critical sections at the same time, then either turn = i or turn = j, and some waiting process can enter its critical _______. section

1. The bounded-waiting requirement in Peterson’s solution is met because: a) Once Pj exits its critical section, it will set flag[j] = false, allowing Pi to enter its critical section. b) Once Pi exits its critical section, it will set flag[i] = true, allowing Pj to enter its critical section. c) Both Pi and Pj can enter their critical sections simultaneously. d) Neither Pi nor Pj can enter their critical sections.
2. In Peterson’s solution, Pi will enter the critical section after at most how many entries by Pj? a) Two. b) Three. c) One. d) Four.

Bounded Waiting: A requirement for a solution to the critical-section problem, ensuring that a bound or limit exists on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

1. Once Pj exits its critical section, it will set flag[j] = false, allowing Pi to enter its critical _______. section
2. Pi will enter the critical section (progress) after at most one entry by Pj (bounded _______). waiting

1. Peterson’s solution is not guaranteed to work on modern computer architectures because: a) It is too complex. b) Processors and/or compilers may reorder operations that have no dependencies. c) It requires special hardware. d) It is too slow.
2. For single-threaded applications, reordering of instructions is: a) Problematic. b) Acceptable as the result will always be the same. c) Not allowed. d) Always produces inconsistent results.
Instruction Reordering: The process by which processors and/or compilers may reorder operations that have no dependencies to improve system performance.

1. For multithreaded applications with shared data, the reordering of instructions may produce inconsistent or unexpected _______. results
2. Understanding why Peterson’s solution will not work on modern architectures is useful for better understanding _______ conditions. race

Reordering of Instructions: The process by which a processor may change the order of instructions to improve performance, which can lead to inconsistent results in multithreaded applications.

1. If there are no data dependencies between the variables flag and x, it is possible that a processor may: a) Ignore the instructions. b) Reorder the instructions. c) Execute the instructions in order. d) Terminate the process.
2. If the processor reorders the statements issued by Thread 1, it may: a) Load the variable x before loading the value of flag. b) Load the value of flag before loading the variable x. c) Ignore the variable x. d) Ignore the value of flag.

Data Dependencies: Relationships between instructions that dictate the order in which they must be executed to produce correct results.

1. If there are no data dependencies between the variables flag and x, it is possible that a processor may reorder the instructions for Thread 2 so that flag is assigned true before the assignment of x = _______. 100
2. If the processor reorders the statements issued by Thread 1, it may load the variable x before loading the value of _______. flag

1. The effects of instruction reordering in Peterson’s solution can lead to: a) Both threads being active in their critical sections at the same time. b) Only one thread being active in its critical section. c) Neither thread being active in its critical section. d) Deadlock.
2. To ensure that Peterson’s solution works correctly on modern computer architectures, we must use: a) Memory barriers. b) Deadlock prevention. c) Starvation avoidance. d) Context switching.

Memory Barrier: A mechanism used to ensure that instructions are executed in the correct order, preventing reordering of operations that could lead to inconsistent results in multithreaded applications.

1. The variables flag[] and turn are independent, so they might be _______. reordered
2. To ensure that Peterson’s solution works correctly on modern computer architectures, we must use _______ barriers. memory

---------------------------------------------------------------------------