Operating Systems Intro

Questions and Answers

What is the primary role of an operating system (OS) in a computer system?

  • To provide hardware resources directly to users without a user interface.
  • To execute user programs exclusively without managing resources.
  • To directly manipulate hardware components for applications.
  • To act as an intermediary between users and computer hardware. (correct)

Which component is NOT a fundamental part of the computer system structure?

  • Hardware
  • Applications
  • Network Router (correct)
  • Operating System

What is the primary function of the bootstrap program during computer startup?

  • Providing a user interface for application execution.
  • Loading the operating system kernel after system initialization. (correct)
  • Managing user accounts and permissions.
  • Allocating system resources to running applications.

How does Direct Memory Access (DMA) contribute to system efficiency?

  • By reducing the overhead associated with frequent interrupts. (correct)

What is the key benefit of using APIs for accessing system calls?

  • Abstraction, interoperability, and efficient access to system calls. (correct)

Which method is NOT a typical way of passing parameters during a system call?

  • Through direct hardware access (correct)

What is the primary purpose of CPU privilege modes (user mode vs. kernel mode) in system call security?

  • To enhance hardware security through privileged instructions. (correct)

What is a key characteristic of a microkernel OS design approach?

  • Having a minimal kernel with most services running in user space. (correct)

Which of the following is NOT a typical process state?

  • Compiling (correct)

What type of information is typically stored in a Process Control Block (PCB)?

  • Process state, program counter, and CPU registers. (correct)

Which type of scheduler controls the degree of multiprogramming?

  • Long-term scheduler (correct)

Which system call creates a new process?

  • fork() (correct)

What is a key advantage of multithreading?

  • Cheaper resource sharing compared to process creation. (correct)

Which model maps many user threads to one kernel thread?

  • Many-to-One (correct)

Which of the following is a primary goal of CPU scheduling?

  • Keeping the CPU busy to maximize utilization. (correct)

Which scheduling algorithm is non-preemptive and simple to implement but can lead to the convoy effect?

  • First-Come, First-Served (FCFS) (correct)

What is the main requirement for addressing the Critical Section Problem?

  • Ensuring mutual exclusion, progress, and bounded waiting. (correct)

What is the purpose of mutex locks?

  • To provide mutual exclusion. (correct)

What is a characteristic of external fragmentation in memory management?

  • Total available space is not contiguous. (correct)

What is the cause of thrashing in virtual memory?

  • Size of locality is greater than total memory size. (correct)

Flashcards

Operating System (OS)

A program acting as an intermediary between users and computer hardware.

Computer System Components

Hardware, OS, Applications, Users. Applications cannot directly manipulate hardware.

Direct Memory Access (DMA)

Lowers overhead from interrupts, enabling direct data transfer between memory and I/O devices.

System Call

A programming interface to access OS services, often written in C/C++.

CPU Privilege Modes

User mode has limited access, while kernel mode has full access to hardware.

Microkernel

Minimal kernel with services in user space, offering flexibility.

Process

A program in execution; includes code, data, stack, heap, and a PCB.

Process States

New, Ready, Running, Waiting, Terminated.

Long-Term Scheduler

Controls degree of multiprogramming.

Process Creation

Parent creates children, which can share all, some, or no resources.

fork()

Creates a new process (POSIX).

Thread

Lightweight process within a process; shares resources.

Many-to-One Thread Model

Many user threads mapped to one kernel thread.

One-to-One Thread Model

Each user thread is mapped to a unique kernel thread.

CPU Scheduling Criteria

Keep CPU busy, Maximize processes completed, Minimize turnaround/waiting time.

First-Come, First-Served (FCFS)

The simplest scheduling algorithm; non-preemptive and can lead to the convoy effect.

Round Robin (RR)

Time quantum-based preemptive scheduling.

Race Conditions

Conditions where multiple processes access/manipulate shared data concurrently. Results depend on execution order.

Critical Section

Segment of code accessing shared resources.

Memory Hierarchy

Registers, cache, main memory, secondary storage.

Study Notes

Topic 1: Introduction to Operating Systems

  • An OS is a program that acts as an intermediary between computer hardware and users.
  • Primary goals of an OS are to provide a user interface, create a friendly environment, execute user programs, and manage computer resources.
  • Computer systems consist of four components: hardware, the OS, applications, and users.
  • Applications do not directly manipulate hardware.
  • User-oriented OS services include user interface, program execution, I/O operations, file system manipulation, communication, and error detection.
  • System-oriented services include resource allocation, accounting, protection, and security.
  • The bootstrap program is stored in ROM/EPROM and loaded during power-up.
  • The bootstrap program initializes the system and loads the OS kernel.
  • The Master Boot Record (MBR) contains the partition table.
  • Interrupts notify the CPU about hardware events and invoke system calls.
  • The interrupt handling process involves saving context, executing the handler, and restoring context.
  • Interrupts enable efficient I/O operations.
  • Direct Memory Access (DMA) reduces overhead from frequent interrupts.
  • DMA allows direct data transfer between memory and I/O devices.
  • DMA is useful for multiple I/O accesses using memory buffers.

Topic 2: System Calls

  • System calls provide a programming interface to OS services.
  • System Calls are typically written in high-level languages like C/C++.
  • They are accessed through an API or via direct calls.
  • Three common APIs include Win32 API (Windows), POSIX API (UNIX/Linux/Mac), and Java API.
  • Using APIs provides abstraction, interoperability, and efficient access to system calls.
  • Each system call has a number.
  • A system call interface maintains a table indexed by numbers.
  • The caller does not need to know implementation details of a system call.
  • Parameters can be passed through registers.
  • Parameters can also be passed through memory blocks with the address passed in a register.
  • Parameters can be passed through the stack (push/pop).
  • System call security is enforced through hardware security using privileged instructions.
  • CPU privilege modes include user mode and kernel mode.
  • The Current Privilege Level (CPL) is stored in system registers.
  • A layered OS design divides the OS into layers, each using only lower-level functions.
  • A microkernel OS design uses a minimal kernel with most services in user space.
  • A monolithic OS design has limited structuring and most functionality in the kernel.
  • A hybrid OS design combines a microkernel with extended functionality.

Topic 3: Process Management

  • A process is a program in execution.
  • Processes have components: program code, data, stack, heap, and a process control block (PCB).
  • Memory structure of a process includes stack, heap, data section, text section, and BSS.
  • Process states: new, ready, running, waiting, and terminated.
  • New is when the process is being created.
  • Ready is when the process is waiting for CPU allocation.
  • Running is when instructions are being executed.
  • Waiting is when the process is waiting for an event to occur.
  • Terminated is when execution is finished.
  • Process Control Block (PCB) contains process state, program counter, CPU registers, CPU scheduling information, memory management information, I/O status information, and accounting information.
  • The long-term scheduler controls the degree of multiprogramming.
  • The short-term scheduler selects which process gets the CPU next.
  • The medium-term scheduler swaps processes between memory and disk.
  • Scheduling also uses ready queues and device queues.
  • A parent process creates children processes, forming a tree.
  • Resource sharing options for process creation are all resources, a subset of resources, or no sharing.
  • Execution options are concurrent execution or the parent waits.
  • Address space options for the child process are a duplicate of the parent or a new program loaded.
  • fork() is the system call that creates a new process.
  • exec() replaces the process memory with a new program.
  • wait() synchronizes the parent with its children.
  • exit() notifies the parent that the process has finished.
  • Process termination can occur via normal completion.
  • Other termination reasons: parent termination (cascading termination), resource limits exceeded, or task no longer needed.

Topic 4: Threads

  • A thread represents a lightweight process within a process.
  • Threads have components: thread ID, program counter, register set, and stack.
  • Threads share resources: code section, data section, and files.
  • Multithreading offers advantages like being cheaper than process creation (economy).
  • Shared memory and resources is another advantage of multithreading.
  • It provides responsiveness by allowing continued execution if part is blocked.
  • It provides scalability and can use multiprocessor architectures.
  • In the Many-to-One thread model, multiple user threads are mapped to one kernel thread.
  • In the One-to-One thread model, each user thread maps to a kernel thread.
  • The Many-to-Many thread model maps many user threads to many kernel threads.
  • The Two-level thread model is a combination with bound threads.
  • POSIX Pthreads provides functions such as: pthread_create, pthread_exit, pthread_join, and pthread_detach.
  • Thread creation parameters include attributes and the execution function.
  • Threads share the same address space, while processes have separate address spaces.
  • Thread creation is lightweight, while process creation is heavyweight.
  • Context switching is faster for threads compared to processes.
  • The level of isolation for threads is less than it is for processes.
  • Thread-related issues concern signal handling, thread cancellation (asynchronous vs. deferred), thread-local storage, and thread safety levels (thread safe, conditionally safe, unsafe).

Topic 5: CPU Scheduling

  • CPU utilization aims to keep the CPU busy.
  • Throughput measures the number of processes completed per time unit.
  • Turnaround time is the time from submission to completion.
  • Waiting time is the time spent in the ready queue.
  • Response time is the time from submission to the first response.
  • First-Come, First-Served (FCFS) is the simplest scheduling algorithm and is non-preemptive.
  • Shortest-Job-First (SJF) is optimal for minimizing waiting time.
  • Priority Scheduling assigns CPU time based on priority number.
  • Round Robin (RR) is a time quantum-based preemptive scheduling algorithm.
  • Multilevel Queue uses multiple queues for different process types.
  • Multilevel Feedback Queue allows processes to move between queues.
  • FCFS is simple but can lead to the convoy effect.
  • SJF is optimal but requires predicting burst times.
  • Priority scheduling can lead to starvation of low-priority processes.
  • RR is good for time-sharing systems because it provides fair allocation.
  • The Linux Completely Fair Scheduler (CFS) uses "nice" values to determine process priority.
  • In CFS, virtual runtime (vruntime) tracks execution time.
  • Processes with lower vruntime get scheduled first.
  • In CFS, time slice is proportional to process weight.
  • Soft real-time scheduling provides no guarantee when a critical process will be scheduled.
  • Hard real-time scheduling requires that a task must be serviced by its deadline.
  • Rate Monotonic Scheduling assigns priority based on period length.

Topic 6: Process Synchronization

  • A critical section is a segment of code accessing shared resources.
  • Requirements for a solution to the critical section problem: mutual exclusion, progress, and bounded waiting.
  • Atomic instructions, such as test-and-set and compare-and-swap, provide hardware support for synchronization.
  • The test-and-set instruction atomically reads and sets a boolean value.
  • Mutex locks provide mutual exclusion.
  • Semaphores can be counting or binary.
  • Monitors are a high-level synchronization construct.
  • The Producer-Consumer Problem involves a shared buffer between producer and consumer processes.
  • A bounded buffer has a fixed size.
  • An unbounded buffer has no practical limit.
  • Synchronization is required to prevent race conditions in the Producer-Consumer Problem.
  • Race conditions occur when multiple processes access and manipulate shared data concurrently.
  • In race conditions, the final value depends on the order of execution.
  • Race conditions can lead to data inconsistency.

Topic 7: Memory Management

  • The memory hierarchy consists of registers, cache, main memory, and secondary storage.
  • Address binding can occur at compile time, load time, or execution time.
  • Contiguous allocation assigns each process one continuous section of memory.
  • Fixed partitioning divides memory into fixed-sized partitions.
  • Variable partitioning creates partitions of variable size.
  • External fragmentation means that enough total space is available but is not contiguous.
  • Internal fragmentation means that allocated space is larger than requested.
  • Paging divides physical memory into fixed-sized frames.
  • Paging divides logical memory into pages of the same size.
  • A page table maps logical pages to physical frames.
  • Address translation in paging involves the page number and offset.
  • Hierarchical page tables use multiple levels of page tables.
  • Inverted page tables have one entry per physical frame.

Topic 8: Main Memory

  • The CPU can only directly access registers and main memory.
  • Programs must be brought from disk into memory before execution.
  • Memory protection is required for correct operation.
  • Addresses can be bound at compile time, load time, or execution time.
  • A logical address space is distinct from the physical address space.
  • The Memory Management Unit (MMU) is used for address translation.
  • Contiguous memory allocation involves partitioning memory for OS and user processes.
  • Contiguous allocation can use fixed and variable partition schemes.
  • External and internal fragmentation are issues in contiguous memory allocation.
  • Paging divides physical memory into fixed-sized frames.
  • Paging divides logical memory into pages of the same size.
  • A page table is used for address translation with paging.
  • Page tables can be structured hierarchically, hashed, or inverted.
  • A Translation Look-aside Buffer (TLB) is a fast-lookup hardware cache for page table entries.
  • A TLB improves effective memory access time.

Topic 9: Virtual Memory

  • Virtual memory separates user logical memory from physical memory.
  • Virtual memory allows running programs larger than physical memory.
  • Demand paging brings pages into memory only when needed.
  • Page fault handling is a process in demand paging.
  • Page faults can have a performance impact.
  • Page replacement algorithms include FIFO, Optimal, and LRU.
  • Frame allocation strategies determine how frames are allocated to processes.
  • Page replacement can be global or local.
  • Thrashing can occur when the size of locality is greater than the total memory size.
  • The Working Set Model is used to manage thrashing.
  • Page Fault Frequency (PFF) is another method for managing thrashing.
  • Other memory considerations include what steps to take when prepaging, page size selection, TLB reach, program structure impact on performance, and I/O interlock.

Topic 10: Deadlocks

  • In the system model, there are resource types and instances.
  • The process resource utilization cycle involves request, use, and release.
  • Four necessary conditions for deadlock are mutual exclusion, hold and wait, no preemption, and circular wait.
  • Methods for handling deadlocks include prevention, avoidance, detection and recovery, and ignoring the problem.
  • Deadlock prevention involves invalidating one of the four necessary conditions.
  • Deadlock avoidance algorithms include the Resource-Allocation Graph algorithm and Banker's Algorithm.
  • For deadlock detection, a wait-for graph can be used for a single instance.
  • A detection algorithm can be used for multiple instances.
  • Recovery from deadlock can be achieved through process termination and resource preemption.

Tutorial 1: Computer Booting Process

  • The bootstrapping program is a small program in ROM/EPROM that loads the OS.
  • A boot loader executes code in the Master Boot Record (MBR).
  • The MBR contains a partition table with disk partition information.
  • OS initialization loads data structures, device drivers, and modules.
  • Interrupts notify the CPU about hardware events and invoke system calls.
  • The CPU interrupt handling process checks if interrupts are enabled and pending, saves the current context, executes the interrupt routine, and handles higher priority interrupts first.
  • Without interrupts, I/O operation waits until completion (active waiting).
  • With interrupts, I/O operation initiates and returns immediately.
  • Direct Memory Access (DMA) reduces overhead from frequent interrupts.
  • Direct Memory Access (DMA) allows direct data transfer between memory and I/O devices.
  • Direct Memory Access (DMA) is programmed for multiple I/O accesses using memory buffers.
  • Dual user-kernel space controlled by the CPU bit is a hardware protection mechanism.
  • Privileged instructions are only executable in kernel space.
  • The system call process switches the Current Privilege Level (CPL) between user and kernel mode.

Tutorial 2: Processes and Threads

  • Process states include: new, ready, running, waiting/blocked, swapped out (via medium-term scheduling), ready-swapped, and terminated.
  • Key actions at each transition include resource allocation, CPU allocation, I/O operations, swapping, and resource reclamation.
  • CPU-bound processes spend most of their time performing calculations.
  • I/O-bound processes spend significant time waiting for I/O operations, such as database servers.
  • Scheduling priorities have an impact on different process types.
  • Threads are created using pthread_create().
  • Threads are executed via a specified function.
  • Threads are terminated using pthread_exit().
  • Threads are synchronized with pthread_join().
  • In multithreading in web servers, the main process initializes and manages high-level operations.
  • Worker processes handle incoming connections.
  • Thread pools manage multiple worker threads.
  • Request handling threads process individual client requests.
  • Multithreading provides concurrent processing, resource sharing, and responsiveness.
  • Process hierarchies and parent-child relationships are created using fork().
  • The Linux clone() system call is a low-level function for creating processes or threads.
  • Clone offers more flexibility but is Linux-specific, whereas pthread_create is more portable but less customizable.
  • Challenges when using clone include flag selection, error handling, and platform limitations.

Tutorial 3: Scheduling and Synchronization

  • Round Robin scheduling's impact of quantum size refers to turnaround time, waiting time, and response time.
  • Scheduling metrics are analyzed for different quantum values.
  • Performance metrics can show spikes at particular quantum sizes, such as when the quantum crosses a process's burst length.
  • Preemptive scheduling has advantages like improved responsiveness, fair CPU sharing, and dynamic adaptation.
  • Preemptive scheduling has disadvantages like increased overhead, synchronization complexity, and unpredictability.
  • Non-preemptive scheduling has advantages like lower overhead, simplified synchronization, and deterministic behavior.
  • Non-preemptive scheduling has disadvantages like poor responsiveness, risk of CPU monopolization, and limited adaptability.
  • Each approach is more appropriate for some use cases than others.
  • In Exponential Averaging for CPU Burst Prediction, the formula is: τₙ₊₁ = α·tₙ + (1 − α)·τₙ.
  • Understanding how different α values affect prediction behavior is key: α = 0 ignores the most recent burst, while α = 1 uses only the most recent burst.
  • Analyzing prediction patterns for various CPU burst sequences shows how quickly the estimate tracks changing workloads.
  • Semaphores are used for thread coordination.
  • Semaphores implement strict alternating execution patterns.
  • Semaphores ensure sequential execution order across multiple threads.
  • Semaphores protect critical sections from race conditions.
  • It's important to identify potential race conditions in concurrent file access.
  • Consequences of race conditions include an inconsistent state, data corruption, and performance issues.
  • Semaphores can be used to fix race conditions.
  • Utilize binary semaphores versus counting semaphores appropriately.

Tutorial 4: File Systems and Memory Management

  • File systems can support many file structure types versus a stream of bytes.
  • Advantages of structured approaches: system-level support, efficiency.
  • Disadvantages include increased system size and limited application flexibility.
  • A byte stream creates simplicity and application flexibility.
  • The trade-off is that the application must define its own file structures.
  • Contiguous allocation is good for sequential and random access.
  • Linked allocation is good for sequential access, but poor for random access.
  • Indexed allocation is good for both sequential and random access.
  • Performance must be considered for each method.
  • Logical address is an abstract address from the user/process point of view, and is generated by the CPU.
  • Physical address is the actual memory location that is generated by the MMU.
  • There must be a translation process between the two addresses.
  • Page sizes are powers of 2 for efficient address splitting when performing paging implementation.
  • Page table entry calculations are essential for paging implementation.
  • One must calculate the logical page number and offset from addresses when paging.
  • External fragmentation occurs when there is enough total space but not contiguous.
  • Internal fragmentation occurs when the space allocated is larger than requested.
  • Fragmentation occurs in different memory allocation methods.
  • Hierarchical page tables are indexed by the logical page number, and there is one table per process.
  • Inverted page tables are indexed by the physical frame number, with one table for all processes.

Tutorial 5: Virtual Memory

  • Page faults occur when accessing pages not in main memory.
  • The OS takes actions to handle page faults.
  • The page fault lower bound is the number of distinct pages.
  • The page fault upper bound is the length of the reference string.
  • LRU (Least Recently Used) replaces pages not used for the longest time.
  • FIFO (First-In-First-Out) replaces the oldest page in memory.
  • Optimal replacement evicts the page that will not be needed for the longest time in the future.
  • Detailed execution steps exist for each of the algorithms.
  • Comparing page fault rates across the algorithms shows their relative performance.
  • Belady's Anomaly is a phenomenon where adding more frames causes more page faults.
  • Belady's Anomaly occurs in FIFO but not in LRU or Optimal replacement.
  • This anomaly is demonstrated with reference string examples.
  • There are performance implications in memory management.
