Midterm Study Guide 351

Summary

This document is a study guide for a midterm exam on operating systems. It covers core concepts including processes, threads, CPU scheduling, and synchronization, with material spanning from Introduction to Operating Systems through Process Synchronization and CPU Scheduling.

Full Transcript


Week 1: Introduction to Operating Systems

Key Concepts:
- Operating System (OS): A program that manages hardware resources and acts as an intermediary between the user and the hardware.
- Functions of an OS:
  - Process Management: Manages the CPU, memory, files, and I/O devices needed to execute processes.
  - Memory Management: Tracks which parts of memory are in use and manages memory allocation.
  - Storage Management: Manages storage devices and file systems.
  - I/O Management: Provides interfaces for various input/output devices.
  - Security and Protection: Controls access to system resources and protects against internal and external threats.
- System Calls: Functions provided by the OS that let processes request services (e.g., file operations).

Week 2: Process Management

Processes:
- Process: A program in execution. It requires resources such as CPU time, memory, and I/O devices.
- Process States:
  - New: The process is being created.
  - Ready: The process is ready to be assigned to a processor.
  - Running: The process is executing instructions.
  - Waiting: The process is waiting for an event (e.g., I/O completion).
  - Terminated: The process has finished execution.

Process Scheduling:
- Short-term Scheduler: Decides which process in memory gets the CPU next.
- Long-term Scheduler: Selects which processes to load into memory.
- Medium-term Scheduler: Swaps processes in and out of memory to optimize CPU usage.

Process Control Block (PCB):
- A data structure containing the process state, ID, scheduling information, memory limits, etc., used by the OS to manage each process.

Week 2: Inter-process Communication
- Shared Memory: Fast, but requires explicit synchronization.
- Message Passing: Easier to use but slower than shared memory; typically used when processes exchange small amounts of data.

Process Creation in Unix/Linux:
- fork(): Creates a child process. The child is a clone of the parent.
- exec(): Replaces a process's memory image with a new program.
- wait(): The parent process blocks until a child terminates.
- exit(): Terminates the calling process.
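The fork()/wait()/exit() lifecycle can be sketched in runnable form. The snippet below is a minimal illustration using Python's os module, which wraps the same Unix system calls (POSIX-only; the exit status 7 is an arbitrary example value):

```python
import os

def spawn_and_wait():
    """Parent clones itself with fork(); child exits; parent reaps the child."""
    pid = os.fork()                    # fork(): child gets a copy of the parent
    if pid == 0:
        # Child process: a real program would typically call exec() here to
        # load a new program; this sketch just terminates with status 7.
        os._exit(7)                    # exit(): terminate the child immediately
    # Parent process: fork() returned the child's PID, so wait for it.
    _, status = os.waitpid(pid, 0)     # wait(): block until the child terminates
    return os.WEXITSTATUS(status)      # recover the child's exit status

if __name__ == "__main__":
    print(spawn_and_wait())            # prints 7
```

Note how one fork() call returns twice: 0 in the child and the child's PID in the parent, which is what lets the two copies take different branches.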
Important Concepts to Review:
- Difference between a program and a process: a program is a passive set of instructions, while a process is a program in execution.
- Scheduling mechanisms and how the OS divides the CPU and memory among competing processes.
- System calls and process control in Unix/Linux environments.

Week 4: Introduction to Threads

What is a Thread?
- A thread is the basic unit of CPU utilization, consisting of:
  - A program counter (tracks the next instruction)
  - Register values (hold the current working variables)
  - A stack (records the execution history)
- Multithreading: Multiple threads within a process run concurrently, sharing the process's resources while following separate execution paths.
- Benefits:
  - Responsiveness: The program can keep running even if part of it is blocked (e.g., waiting on I/O).
  - Resource Sharing: Threads share memory and resources, reducing overhead.
  - Economy: Threads are cheaper to create and manage than processes.
  - Scalability: Threads can run in parallel on multicore systems.

Types of Threads:
- User Threads: Managed at the user level without kernel involvement.
- Kernel Threads: Managed directly by the operating system.

Multithreading Models:
- Many-to-One: Multiple user threads map to one kernel thread.
- One-to-One: Each user thread maps to a separate kernel thread (e.g., Windows, Linux).
- Many-to-Many: Multiple user threads map to multiple kernel threads.

Week 5: Process Synchronization

Process Synchronization:
- Goal: Ensure that concurrent processes accessing shared data do not cause data inconsistency or race conditions.
- Race Condition: Occurs when multiple processes manipulate shared data concurrently and the result depends on the order of execution.

Critical Section Problem:
- Critical Section: The part of a process in which shared resources are accessed and modified.
- Solution Requirements:
  - Mutual Exclusion: Only one process may execute in its critical section at a time.
  - Progress: If no process is in its critical section, processes wanting to enter must be able to do so; the decision of who enters next cannot be postponed indefinitely.
  - Bounded Waiting: There is a bound on the number of times other processes may enter their critical sections after a process has requested entry, so no process waits forever.

Synchronization Tools:
1. Semaphores:
   - Binary Semaphore (Mutex): Allows one process at a time.
   - Counting Semaphore: Manages access to a pool of multiple resources.
2. Pthreads Mutexes: Provide mutual exclusion for critical sections in multithreaded programs.
3. Deadlock: Occurs when multiple processes wait indefinitely for each other to release resources.

Key Concepts Summary

Threads:
- Threads improve performance by enabling parallel execution.
- Multithreading models determine how threads are mapped between user and kernel space.

Scheduling:
- Different algorithms (FCFS, SJF, Priority, RR) are used to optimize CPU usage.
- Preemptive vs. non-preemptive scheduling affects process efficiency and responsiveness.

Process Synchronization:
- Protects shared resources from concurrent-access problems (e.g., race conditions).
- Semaphores and mutexes are the key mechanisms for enforcing synchronization in critical sections.

Week 7: CPU Scheduling

CPU Scheduling:
- Goal: Maximize CPU utilization by scheduling processes effectively.
- CPU-I/O Burst Cycle: Processes alternate between CPU bursts and I/O bursts; the CPU scheduler selects a ready process whenever the CPU becomes idle.

Scheduling Types:
- Preemptive Scheduling: A running process can be interrupted so another can execute (used by modern OSes).
- Non-preemptive Scheduling: A process keeps the CPU until it terminates or blocks (common in older systems).

CPU Scheduling Algorithms:
1. First-Come, First-Served (FCFS): Processes are served in the order they arrive.
   - Advantage: Simple to implement.
   - Disadvantage: Causes the convoy effect, where short processes wait behind long ones.
2. Shortest-Job-First (SJF): The process with the shortest next CPU burst runs next.
   - Optimal for minimizing average waiting time, but burst lengths are hard to predict.
3. Priority Scheduling: The CPU is assigned to the process with the highest priority.
   - Problem: Starvation, where low-priority processes may never execute. Solution: aging, which gradually raises the priority of waiting processes.
4. Round Robin (RR): Each process is assigned a time quantum, and processes are executed in circular order.
   - Turnaround time and the number of context switches depend on the quantum size: too large and RR degenerates into FCFS; too small and context-switch overhead dominates.

Pseudocode: readers-writers.c

function reader(reader_id):
    loop 5 times:
        // Entry Section
        wait(mutex)                   // Lock to modify read_count
        read_count += 1               // One more active reader
        if read_count == 1:           // The first reader locks out writers
            wait(rw_mutex)
        signal(mutex)                 // Release the read_count lock

        // Critical Section (Reading)
        print("Reader", reader_id, "is reading")

        // Exit Section
        wait(mutex)                   // Lock to modify read_count
        read_count -= 1               // One fewer active reader
        if read_count == 0:           // The last reader lets writers in
            signal(rw_mutex)
        signal(mutex)                 // Release the read_count lock

        sleep(1)                      // Simulate work outside the critical section

function writer(writer_id):
    loop 5 times:
        // Entry Section
        wait(rw_mutex)                // Only one writer may access the resource

        // Critical Section (Writing)
        print("Writer", writer_id, "is writing")

        // Exit Section
        signal(rw_mutex)              // Release the resource for others

        sleep(1)                      // Simulate work outside the critical section

main:
    initialize rw_mutex to 1          // Admits one writer (or the reader group)
    initialize mutex to 1             // Controls access to read_count
    initialize reader_numbers and writer_numbers with values 1 to 5
    for each reader i from 1 to 5:
        create thread for reader with reader_numbers[i]
    for each writer i from 1 to 5:
        create thread for writer with writer_numbers[i]
    for each reader i from 1 to 5:
        wait for reader thread to finish
    for each writer i from 1 to 5:
        wait for writer thread to finish
    destroy rw_mutex and mutex

Reader-Writer Problem Study Notes

Purpose: Illustrates synchronization between multiple readers and writers that want to access a shared resource.
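The readers-writers pseudocode can be translated into a runnable sketch using Python's threading module, with threading.Semaphore standing in for the semaphores (round counts and sleep times are shortened here so the demonstration finishes quickly):

```python
import threading
import time

read_count = 0
mutex = threading.Semaphore(1)      # protects read_count
rw_mutex = threading.Semaphore(1)   # held by one writer, or by the reader group

def reader(reader_id, rounds=3):
    global read_count
    for _ in range(rounds):
        # Entry section
        mutex.acquire()             # wait(mutex)
        read_count += 1
        if read_count == 1:         # first reader locks out writers
            rw_mutex.acquire()      # wait(rw_mutex)
        mutex.release()             # signal(mutex)

        print(f"Reader {reader_id} is reading")   # critical section

        # Exit section
        mutex.acquire()
        read_count -= 1
        if read_count == 0:         # last reader lets writers back in
            rw_mutex.release()      # signal(rw_mutex)
        mutex.release()
        time.sleep(0.01)            # work outside the critical section

def writer(writer_id, rounds=3):
    for _ in range(rounds):
        rw_mutex.acquire()          # wait(rw_mutex): exclusive access
        print(f"Writer {writer_id} is writing")   # critical section
        rw_mutex.release()          # signal(rw_mutex)
        time.sleep(0.01)            # work outside the critical section

if __name__ == "__main__":
    threads = [threading.Thread(target=reader, args=(i,)) for i in range(1, 4)]
    threads += [threading.Thread(target=writer, args=(i,)) for i in range(1, 3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("done, read_count =", read_count)   # back to 0 once all readers exit
```

Note that this is the "readers-preference" variant, matching the pseudocode: a steady stream of readers can starve the writers, since rw_mutex is only released when read_count drops to zero.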
Key Elements:
- Semaphores: Used to control access to shared resources.
  - rw_mutex: Ensures that either one writer or multiple readers can access the resource.
  - mutex: Controls access to the read_count variable, ensuring mutual exclusion when updating the count of active readers.
- Readers: Multiple readers can read the resource simultaneously.
- Writers: Only one writer can write at any time, and no readers can access the resource while a writer is writing.

This pseudocode clarifies how concurrency and synchronization work in operating systems with multiple threads. It fits into the broader topics of process management and inter-process communication.

Dining Philosophers Problem with Semaphores

function philosopher(philosopher_id):
    loop 5 times:
        // Thinking
        print("Philosopher", philosopher_id, "is thinking")
        sleep(1)                                    // Simulate thinking time

        // Hungry: trying to pick up chopsticks
        print("Philosopher", philosopher_id, "is hungry and trying to pick up chopsticks")
        wait(chopstick[philosopher_id])             // Pick up the left chopstick
        wait(chopstick[(philosopher_id + 1) % N])   // Pick up the right chopstick

        // Eating
        print("Philosopher", philosopher_id, "is eating")
        sleep(1)                                    // Simulate eating time

        // Put down chopsticks
        signal(chopstick[philosopher_id])           // Put down the left chopstick
        signal(chopstick[(philosopher_id + 1) % N]) // Put down the right chopstick
        print("Philosopher", philosopher_id, "has finished eating")

main:
    initialize array of semaphores chopstick[N]     // Represents the chopsticks
    for each i from 0 to N-1:
        initialize chopstick[i] to 1                // Each chopstick starts available

    // Create philosopher threads
    for each i from 0 to N-1:
        assign philosopher_numbers[i] = i
        create thread for philosopher with philosopher_numbers[i]

    // Join philosopher threads
    for each i from 0 to N-1:
        wait for philosopher thread to finish

    // Destroy semaphores after all threads are done
    for each i from 0 to N-1:
        destroy chopstick[i]
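A runnable Python version of the dining philosophers pseudocode is sketched below. One deliberate change is added that is not in the pseudocode: the last philosopher picks up the right chopstick first, which breaks the circular wait that can deadlock the naive all-pick-left version. Rounds and sleeps are shortened so the sketch finishes quickly:

```python
import threading
import time

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N                                         # how many times each ate

def philosopher(pid, rounds=3):
    for _ in range(rounds):
        time.sleep(0.01)                      # thinking
        first, second = pid, (pid + 1) % N    # left chopstick, then right...
        if pid == N - 1:
            # ...except the last philosopher, who reverses the order.
            # This asymmetry prevents the deadlock where every philosopher
            # holds one chopstick and waits forever for the other.
            first, second = second, first
        chopstick[first].acquire()            # wait(chopstick[...])
        chopstick[second].acquire()
        meals[pid] += 1                       # eating (critical section)
        chopstick[first].release()            # signal(chopstick[...])
        chopstick[second].release()

if __name__ == "__main__":
    threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(meals)    # every philosopher ate `rounds` times; no deadlock
```

Other standard deadlock-avoidance fixes would work equally well here, e.g. allowing at most N-1 philosophers to reach for chopsticks at once via a counting semaphore.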
Dining Philosophers Problem Study Notes

Problem Overview:
- Five philosophers sit at a table with a bowl of rice and five chopsticks. Philosophers alternate between thinking and eating.
- To eat, a philosopher must pick up two chopsticks: the one on the left and the one on the right.
- The challenge is to avoid deadlock (no philosopher can eat because each is waiting for a chopstick) and starvation (a philosopher never gets to eat).

Semaphores:
- Used to represent the availability of chopsticks.
- sem_wait: Represents picking up a chopstick (locks it so others cannot use it).
- sem_post: Represents putting down a chopstick (releases it for others to use).

Critical Concepts:
- Deadlock Avoidance: Philosophers must avoid the situation where each picks up one chopstick and waits indefinitely for the other. Note that the pseudocode above, in which every philosopher reaches for the left chopstick first, can deadlock in exactly this way.
- Concurrency: The problem illustrates the need for proper synchronization in concurrent systems, as philosophers (threads) compete for shared resources (chopsticks).
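The effect of algorithm choice in the Week 7 CPU Scheduling section can be made concrete with a small waiting-time calculation. The burst values below are an arbitrary textbook-style example, with all processes assumed to arrive at time 0:

```python
def waiting_times(bursts):
    """Waiting time of each process when run in the given order (all arrive at t=0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # a process waits while all earlier ones run
        elapsed += burst
    return waits

def avg_wait(bursts):
    return sum(waiting_times(bursts)) / len(bursts)

bursts = [24, 3, 3]             # arrival order: P1=24, P2=3, P3=3

# FCFS: run in arrival order; the short jobs sit behind P1 (convoy effect)
print(avg_wait(bursts))         # 17.0

# SJF: run the shortest burst first, which minimizes average waiting time
print(avg_wait(sorted(bursts))) # 3.0
```

Reordering the same three bursts cuts the average wait from 17 to 3 time units, which is why SJF is optimal for this metric even though burst lengths must be estimated in practice.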
