Operating Systems Semester Exam


Which type of operating system allows many users to share the computer simultaneously and gives each user the impression that the entire system is dedicated to them?

Time Sharing System

Spooling is a process where data is permanently stored in memory for long-term use.

False

What are the primary goals of an operating system?

Convenience / user friendly

______ involves handling the creation, scheduling, and termination of processes, which are executing programs.

Process Management

What is the main difference between Symmetric Processing and Asymmetric Processing?

Each processor is assigned a specific task or role in Asymmetric Processing.

What is the primary function of a Real Time Operating System (RTOS)?

To process jobs within defined time constraints

Soft Real-Time Operating Systems can afford to miss deadlines occasionally.

True

Match the following system calls with their categories:

Create file, delete file = File management
Create, delete communication connection = Communications
Allocate and free memory = Process control
Get time or date, set time or date = Information maintenance

What are the two main modes of operation needed in an operating system?

User mode and Kernel mode

What is the purpose of the mode bit in the hardware of a computer?

To indicate the current mode: kernel (0) or user (1)

Which of the following sections are part of a process?

Data Section

A program is a passive entity by default.

True

A process becomes active when an executable file is loaded into ________ memory.

main

Match the following process control block (PCB) information with its description:

Process state = Defines whether the process is new, ready, running, waiting, halted, etc.
CPU-scheduling information = Includes process priority, pointers to scheduling queues, etc.
Memory-management information = Contains data like base and limit registers, page tables, etc.
I/O status information = Includes the list of I/O devices allocated to the process, open files, etc.

Define CPU utilization.

Keeping the CPU as busy as possible

What is meant by throughput in CPU scheduling?

Number of processes completed per time unit

Explain waiting time in the context of CPU scheduling.

Sum of periods spent waiting in the ready queue

What is the response time in CPU scheduling?

Time it takes to start responding

Define Turn Around Time (TAT) in the context of CPU scheduling.

Completion Time - Arrival Time

What is the formula to calculate Waiting Time?

Turn Around Time - Burst Time

What scheduling criteria should be considered when selecting a CPU-scheduling algorithm?

Response time

What is the main problem with multi-level queue scheduling?

Deciding the number of ready queues, the scheduling algorithm used within each queue, and the scheduling between queues; in addition, once a process enters a specific queue, it cannot move to another queue.

Which of the following is a characteristic of a multilevel feedback queue scheduling algorithm?

Processes are moved between queues based on CPU burst characteristics.

The multilevel feedback queue scheduling algorithm prevents starvation.

True

Race condition is a situation where the output of a process depends on the ____________ of processes.

execution sequence

Define Mutual Exclusion in the context of the critical section problem.

Mutual Exclusion refers to the rule that only one process is allowed inside the critical section at a time, ensuring exclusive access to shared resources.

Study Notes

Operating System Overview

  • An operating system (OS) acts as an intermediary between users and hardware, managing computer resources and providing a platform for application programs to run.
  • Goals of an OS:
    • Primary goals: Convenience, user-friendly
    • Secondary goals: Efficiency, reliability, maintainability
  • Functions of an OS:
    • Process management: handling process creation, scheduling, and termination
    • Memory management: managing allocation and deallocation of physical and virtual memory
    • I/O device management: handling I/O operations of peripheral devices
    • File management: managing files on storage devices, including information, naming, permissions, and hierarchy
    • Network management: managing network protocols and functions
    • Security and protection: ensuring system protection against unauthorized access and other security threats

OS Structure

  • Major components of an OS:
    • Kernel: central component managing system resources and communication between hardware and software
    • Process management: process scheduler, process control block (PCB), and concurrency control
    • Memory management: physical memory management, virtual memory management, and memory allocation
    • File system management: file handling, file control block, and disk scheduling
    • Device management: device drivers and I/O controllers
    • Security and access control: authentication, authorization, and encryption
    • User interface: command-line interface (CLI) and graphical user interface (GUI)
    • Networking: network protocols and network interface

Classification of Operating Systems

  • Batch operating system: early computers, no interaction, jobs are processed in batches
  • Multiprogramming operating system: multiple jobs in memory, CPU switching between jobs
  • Multiprocessing operating system: multiple CPUs, each processing a separate job
  • Multitasking operating system: multiple tasks, time sharing, and job scheduling
  • Real-time operating system: strict deadlines, predictable responses
  • Distributed operating system: multiple nodes, networked, and loosely coupled

CPU Scheduling

  • Scheduling concepts: process states, process transition diagram, and schedulers
  • Performance criteria: CPU utilization, throughput, and turnaround time
  • Scheduling algorithms: first-come, first-served (FCFS), shortest job first (SJF), and priority scheduling

Concurrent Processes

  • Process concept: principle of concurrency, producer/consumer problem, and mutual exclusion
  • Classical problems in concurrency: dining philosopher problem, sleeping barber problem, and semaphores
  • Interprocess communication models and schemes

Memory Management

  • Basic bare machine: resident monitor, multiprogramming with fixed and variable partitions
  • Protection schemes: paging, segmentation, and paged segmentation
  • Virtual memory concepts: demand paging, performance, and page replacement algorithms

I/O Management and Disk Scheduling

  • I/O devices and I/O subsystems: I/O buffering, disk storage, and disk scheduling
  • File system: file concept, file organization, and file access mechanisms
  • File system implementation issues: file allocation methods, free-space management, and file protection

Microkernel Approach

  • Structures the OS by removing nonessential components from the kernel and implementing them as system and user-level programs
  • Benefits: easier to extend the OS, smaller kernel, and fewer modifications required

Command Interpreters and Interfaces

  • A command interpreter is a special program that runs when a job is initiated or when a user first logs on.
  • On systems with multiple command interpreters, they are known as shells.
  • Examples of shells include Bourne shell, C shell, Bourne-Again shell, and Korn shell.

Graphical User Interfaces (GUIs)

  • A GUI is a user-friendly interface that allows users to interact with the operating system using a mouse and windows.
  • GUIs are characterized by a desktop with icons, files, and directories.
  • Users can interact with the GUI by clicking on icons, selecting files, and pulling down menus.

Interface Choice

  • The choice of whether to use a command-line interface or a GUI is mostly a matter of personal preference.
  • System administrators and power users often use command-line interfaces because they are more efficient.
  • Some systems only provide a subset of system functions via the GUI, leaving the less common tasks to command-line users.

System Calls

  • System calls provide a way for a user program to ask the operating system to perform tasks on its behalf.
  • System calls are an interface to the services provided by an operating system.
  • They are available as routines written in C and C++, and specify the functions and parameters available to an application programmer.

Types of System Calls

  • Process control: end, abort, load, execute, create process, terminate process, etc.
  • File management: create file, delete file, open, close, read, write, reposition, etc.
  • Device management: request device, release device, read, write, reposition, etc.
  • Information maintenance: get time or date, set time or date, get system data, set system data, etc.
  • Communications: create, delete communication connection, send, receive messages, transfer status information, etc.
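
As a concrete illustration of the file-management category, the minimal sketch below invokes the POSIX open, write, and close system calls from C; the file name and message are arbitrary placeholder values.

```c
/* A minimal sketch of file-management system calls using the POSIX API.
   The file name and message below are arbitrary examples. */
#include <fcntl.h>    /* open         */
#include <unistd.h>   /* write, close */
#include <stdio.h>    /* perror       */

int main(void) {
    /* create (or truncate) a file for writing via the open system call */
    int fd = open("example.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "hello, operating system\n";
    if (write(fd, msg, sizeof msg - 1) < 0)   /* write system call */
        perror("write");

    close(fd);                                /* release the file descriptor */
    return 0;
}
```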

Modes of Operation

  • Two modes of operation: User mode and Kernel mode (also called supervisor mode, system mode, or privileged mode).
  • A mode bit is added to the hardware to indicate the current mode.

Process

  • A process is a program in execution.
  • A program is not a process by default; it becomes a process when loaded into main memory and its PCB is created.
  • A process consists of text, stack, data, and heap sections.
  • A process can be in one of the following states: new, running, waiting, ready, or terminated.

Process Control Block (PCB)

  • A PCB is a data structure that represents a process in the operating system.
  • It contains information such as process state, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information.
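
As a rough illustration, the struct below sketches what a PCB might contain; the field names and sizes are hypothetical simplifications, and real kernels (for example, Linux's task_struct) hold far more information.

```c
/* A simplified, hypothetical sketch of a process control block (PCB).
   Field names and sizes are illustrative only. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process identifier                 */
    enum proc_state state;           /* new, ready, running, waiting, ...  */
    uint64_t        program_counter; /* address of the next instruction    */
    uint64_t        registers[16];   /* saved CPU registers                */
    int             priority;        /* CPU-scheduling information         */
    uint64_t        base, limit;     /* memory-management information      */
    int             open_files[16];  /* I/O status: open file descriptors  */
    uint64_t        cpu_time_used;   /* accounting information             */
};
```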

Process States

  • New: the process is being created.
  • Running: instructions are being executed.
  • Waiting (Blocked): the process is waiting for some event to occur.
  • Ready: the process is waiting to be assigned to a processor.
  • Terminated: the process has finished execution.

Schedulers

  • Schedulers: a process migrates among the various scheduling queues throughout its lifetime.
  • Long-term scheduler: determines which processes enter the ready queue from the job pool.
  • Medium-term scheduler: swaps processes in and out of memory to optimize CPU usage and manage memory allocation.
  • Short-term scheduler: selects from among the processes that are ready to execute and allocates the CPU to one of them.

CPU Scheduling

  • CPU scheduling is the process of determining which process in the ready queue is allocated to the CPU.
  • Various scheduling algorithms can be used, such as First-Come-First-Served (FCFS), Shortest Job Next (SJN), Priority, and Round Robin (RR).
  • Different algorithms support different classes of processes and favor different scheduling criteria.

Context Switch

  • Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process.
  • This task is known as a context switch.

CPU Scheduling Algorithms

  • Non-Preemptive: once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU willingly.
  • Preemptive: a process may leave the CPU willingly or be forced off it, for example when its time quantum expires or a higher-priority process arrives.

Scheduling Criteria

  • CPU utilization: keeping the CPU as busy as possible.
  • Throughput: the number of processes that are completed per time unit.
  • Waiting time: the sum of the periods spent waiting in the ready queue.
  • Response time: the time it takes to start responding.
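
As a worked example with made-up numbers: a process that arrives at time 0 ms, needs a 5 ms CPU burst, and completes at time 12 ms has Turnaround Time = 12 − 0 = 12 ms and Waiting Time = 12 − 5 = 7 ms, using Turnaround Time = Completion Time − Arrival Time and Waiting Time = Turnaround Time − Burst Time.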

First-Come-First-Served (FCFS) Scheduling

  • The process that requests the CPU first is allocated the CPU first.
  • Implementation is managed by a FIFO queue.
  • It is always non-preemptive in nature.

Advantages and Disadvantages of FCFS

  • Advantages: easy to understand, and can be used for background processes where execution is not urgent.
  • Disadvantages: suffers from the convoy effect, in which short processes must wait behind long processes, resulting in large average waiting times.

CPU Scheduling Algorithms

  • FCFS (First-Come-First-Served) algorithm:
    • Non-preemptive algorithm
    • Process with the earliest arrival time gets the CPU first (a worked example appears in the sketch after this list)
    • High average waiting time and Turnaround Time (TAT) compared to other algorithms
  • SJF (Shortest Job First) algorithm:
    • Non-preemptive and preemptive versions
    • CPU is assigned to the process with the smallest burst time
    • In non-preemptive version, once a process is scheduled, it cannot be preempted
    • In the preemptive version (Shortest Remaining Time First), the running process is preempted if a newly arrived process has a smaller remaining burst time
    • Guarantees minimal average waiting time, making it an optimal algorithm
  • Priority Scheduling algorithm:
    • Each process is assigned a priority
    • CPU is allocated to the process with the highest priority
    • Non-preemptive and preemptive versions
    • In non-preemptive version, once a process is scheduled, it cannot be preempted
    • In the preemptive version, the running process is preempted if a newly arrived process has a higher priority
    • Tie is broken using FCFS order
  • Round Robin (RR) algorithm:
    • Designed for time-sharing systems
    • Each process is allocated a fixed time slice (time quantum)
    • If a process completes within the time quantum, the next process is executed
    • If a process does not complete within the time quantum, it is preempted and added to the end of the ready queue
    • Each process waits no longer than (n − 1) × time quantum for its next time slice, where n is the number of processes in the ready queue
  • Multi-Level Queue (MLQ) Scheduling algorithm:
    • Partitions the ready queue into several separate queues
    • Each queue has its own scheduling algorithm
    • Processes are permanently assigned to one queue based on their properties and requirements
    • Scheduling among the queues is commonly implemented as fixed-priority preemptive scheduling or round robin with different time quantum
  • Multi-Level Feedback Queue (MLFQ) Scheduling algorithm:
    • Allows processes to move between queues
    • If a process uses too much CPU time, it is moved to a lower-priority queue
    • If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue
    • Prevents starvation
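
To make the FCFS algorithm and the scheduling criteria concrete, the sketch below computes completion, turnaround, and waiting times for a few processes under non-preemptive FCFS; the arrival and burst times are made-up example values.

```c
/* A minimal FCFS scheduling sketch: computes completion, turnaround (TAT),
   and waiting times for processes listed in arrival order.
   The arrival and burst times are hypothetical example values. */
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2};       /* example arrival times */
    int burst[]   = {5, 3, 8};       /* example CPU bursts    */
    int n = 3, time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        if (time < arrival[i])       /* CPU idles until the process arrives */
            time = arrival[i];
        time += burst[i];            /* completion time of process i        */
        int tat  = time - arrival[i];        /* TAT = completion - arrival  */
        int wait = tat  - burst[i];          /* waiting = TAT - burst       */
        printf("P%d: completion=%d TAT=%d waiting=%d\n", i, time, tat, wait);
        total_tat  += tat;
        total_wait += wait;
    }
    printf("average TAT=%.2f average waiting=%.2f\n",
           total_tat / n, total_wait / n);
    return 0;
}
```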

Process Synchronization and Race Condition

  • Process Synchronization:
    • Multiple processes competing for limited resources
    • Concurrent access to shared data may result in data inconsistency
  • Race Condition:
    • A situation where the output of a process depends on the execution sequence of processes
    • If the order of execution of different processes is changed, the output may change
  • Critical Section Problem:
    • A section of code where a process accesses shared resources
    • Mutual Exclusion: No two processes should be present in the critical section at the same time
    • Progress: If no process is executing in its critical section and some processes wish to enter, only processes not executing in their remainder sections take part in deciding which will enter next, and this decision cannot be postponed indefinitely
    • Bounded Waiting: There exists a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry to its critical section and before that request is granted
  • Solutions to Critical Section Problem:
    • Two-Process Solution: Using Boolean variables, Boolean arrays, and Peterson's Solution
    • Operating System Solution: Using semaphores (counting semaphores and binary semaphores)
    • Hardware Solution: Using test and set lock, and disabling interrupts
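
As an illustration of the semaphore-based (operating system) solution listed above, the following sketch uses a POSIX binary semaphore to protect a counter shared by two threads; the counter, loop count, and thread setup are arbitrary example choices.

```c
/* A minimal sketch of mutual exclusion with a binary semaphore (POSIX).
   Two threads increment a shared counter inside the critical section.
   Compile with -pthread; the workload shown is an arbitrary example. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;              /* binary semaphore guarding the counter */
static long shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* entry section: wait (P)  */
        shared_counter++;        /* critical section         */
        sem_post(&mutex);        /* exit section: signal (V) */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);      /* initial value 1 => binary semaphore */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* expected 200000 */
    sem_destroy(&mutex);
    return 0;
}
```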

Solutions to Critical Section Problem (Two-Process Solution)

  • Using a shared variable turn:
    • Initialization: turn = 0 or 1
    • Process 0 executes:
      • while (turn != 0); // wait for its turn
      • Critical Section
      • turn = 1; // hand the turn to Process 1
      • Remainder Section
    • Process 1 executes:
      • while (turn != 1); // wait for its turn
      • Critical Section
      • turn = 0; // hand the turn to Process 0
      • Remainder Section
    • Satisfies Mutual Exclusion, but suffers from strict alternation (processes must enter in turns, violating Progress)
  • Using a Boolean array flag[2] (initially flag[0] = flag[1] = F):
    • Process i (with the other process j) executes:
      • flag[i] = T; // Process i declares interest
      • while (flag[j]); // wait while the other process is interested
      • Critical Section
      • flag[i] = F;
      • Remainder Section
    • Satisfies Mutual Exclusion, but may lead to deadlock if both processes set their flags before either tests the other's
  • Dekker's algorithm:
    • Uses a combination of Boolean flags and turn variable
    • Ensures Mutual Exclusion, Progress, and Bounded Waiting
  • Peterson's solution:
    • Uses a combination of Boolean flags and turn variable
    • Ensures Mutual Exclusion, Progress, and Bounded Waiting
    • Classic Software-based solution to the critical-section problem for two processes
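
A minimal sketch of Peterson's solution in C is shown below; C11 atomics are used here as an assumption to approximate the sequentially consistent memory the classic algorithm relies on, since plain loads and stores are not guaranteed to be safe on modern hardware.

```c
/* A minimal sketch of Peterson's solution for two processes (i = 0 or 1).
   C11 sequentially consistent atomics stand in for the idealized memory
   model assumed by the classic algorithm. */
#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];   /* flag[i] == true: process i wants to enter     */
atomic_int  turn;      /* whose turn it is when both want to enter      */

void enter_critical_section(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);      /* declare interest               */
    atomic_store(&turn, other);        /* give priority to the other     */
    /* wait while the other process is interested AND it is its turn */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;  /* busy-wait */
}

void exit_critical_section(int i) {
    atomic_store(&flag[i], false);     /* no longer interested           */
}
```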

This quiz covers the fundamentals of operating systems, including classification, structure, CPU scheduling, and process synchronization. It is designed for a semester exam and tests knowledge of key concepts and terminology.
