OS Reviewer


1. OS History and Overview
Primary Role: Manage computer hardware and software.
ENIAC: Early computer from the 1940s.
GM-NAA I/O: OS for the IBM 704 in the 1950s.
CTSS (1960s): First time-sharing OS, developed at MIT.
Unix (1970s): Introduced multitasking and multi-user capabilities; the foundation for many modern operating systems.
Windows (1980s): GUI-based OS for general users.
Linux (1990s): Open-source OS, widely used in servers and embedded systems.
Android & iOS (2000s): Dominant mobile operating systems.

2. Scheduling Algorithms
First-Come, First-Served (FCFS): Executes processes in the order they arrive (see the FCFS sketch after section 5).
Shortest Job First (SJF): Executes the process with the shortest burst time first.
Round Robin (RR): Each process gets a fixed time quantum to run.
Priority Scheduling: Runs the highest-priority process first; high-priority processes can starve lower-priority ones.
Multilevel Feedback Queue: Multiple queues with different priority levels; balances between long and short processes.
Real-Time Scheduling: Rate-Monotonic and Earliest Deadline First (EDF) for time-critical tasks.
Fair Share Scheduling: Ensures equal resource distribution across user groups.
Priority Inversion: A lower-priority process holds a resource needed by a higher-priority process; resolved by Priority Inheritance.

3. Deadlock
Definition: Processes wait indefinitely for resources held by each other.
Conditions for Deadlock (Coffman conditions; all 4 must hold for a deadlock to occur):
1. Mutual Exclusion: At least one resource must be held in a non-sharable mode.
2. Hold and Wait: A process holding one resource is waiting for additional resources held by other processes.
3. No Preemption: Resources cannot be forcibly removed from the processes holding them.
4. Circular Wait: A set of processes must be waiting for each other in a circular chain.
Deadlock Detection: Checking for cycles in the Resource Allocation Graph (RAG); see the cycle-detection sketch after section 5.
Recovery:
○ Abort one or more processes.
○ Resource Preemption: Take resources from some processes and reallocate them.
Performance Tradeoff: Deadlock prevention is computationally expensive.

4. Cache
Purpose: Reduce the time to access frequently used data.
Cache Levels:
○ L1: Smallest, closest, fastest cache.
○ L2: Larger and slower than L1.
○ L3: Shared among CPU cores; largest and slowest.
Cache Associativity:
○ Direct-Mapped
○ Fully Associative
○ Set-Associative
Cache Miss Types:
○ Compulsory Miss: Data accessed for the first time.
○ Capacity Miss: Cache cannot hold all the data.
○ Conflict Miss: Eviction due to associativity rules.
Replacement Policy: Least Recently Used (LRU); see the LRU sketch after section 5.
Cache Coherence: The MESI protocol keeps cache data consistent in multi-core systems.
Write-Back vs Write-Through:
○ Write-Back: Writes data to the cache first, then to memory later.
○ Write-Through: Data is written to the cache and memory simultaneously.

5. Time Concepts in Scheduling
Arrival Time: When a process arrives in the ready queue.
Burst Time: Total CPU time required by a process.
Completion Time: When a process finishes execution.
Turnaround Time: Time from process arrival to completion (Turnaround Time = Completion Time - Arrival Time).
Waiting Time: Time spent waiting in the ready queue (Waiting Time = Turnaround Time - Burst Time).
Response Time: Time from arrival to the first response; important in interactive systems.
CPU Utilization: Percentage of time the CPU is active.
Throughput: Number of processes completed per unit time.
Context Switch: Overhead caused by switching between processes.
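The sketch below (not from the source notes) ties sections 2 and 5 together: a minimal FCFS simulation in Python that computes each process's completion, turnaround, and waiting times using the formulas above. The process list is hypothetical example data.

    def fcfs(processes):
        """processes: list of (name, arrival, burst) tuples, sorted by arrival."""
        clock = 0
        results = []
        for name, arrival, burst in processes:
            start = max(clock, arrival)        # CPU may idle until the process arrives
            completion = start + burst
            turnaround = completion - arrival  # Completion Time - Arrival Time
            waiting = turnaround - burst       # Turnaround Time - Burst Time
            results.append((name, completion, turnaround, waiting))
            clock = completion
        return results

    for name, c, t, w in fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
        print(f"{name}: completion={c} turnaround={t} waiting={w}")

Sorting the list by burst time instead of arrival time roughly approximates non-preemptive SJF (true SJF picks the shortest job among those that have already arrived).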
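A minimal sketch (not from the source notes) of the deadlock-detection idea in section 3: a depth-first search for a cycle in a Resource Allocation Graph, represented here as a plain adjacency dictionary. The processes and resources are hypothetical; an edge P -> R means P requests R, and R -> P means R is currently held by P.

    def has_cycle(graph):
        """graph: dict mapping each node to the nodes it points to."""
        WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on current path / done
        color = {node: WHITE for node in graph}

        def dfs(node):
            color[node] = GRAY
            for nxt in graph.get(node, []):
                if color.get(nxt, WHITE) == GRAY:  # back edge: a cycle exists
                    return True
                if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                    return True
            color[node] = BLACK
            return False

        return any(color[n] == WHITE and dfs(n) for n in graph)

    rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}  # circular wait
    print(has_cycle(rag))  # True -> deadlock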
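A minimal sketch (not from the source notes) of the LRU replacement policy from section 4, using Python's collections.OrderedDict to track recency of use; the capacity and keys are hypothetical.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()          # least recent first, most recent last

        def get(self, key):
            if key not in self.data:
                return None                    # cache miss
            self.data.move_to_end(key)         # mark as most recently used
            return self.data[key]

        def put(self, key, value):
            if key in self.data:
                self.data.move_to_end(key)
            self.data[key] = value
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict the least recently used entry

    cache = LRUCache(2)
    cache.put("a", 1); cache.put("b", 2)
    cache.get("a")                             # "a" is now most recently used
    cache.put("c", 3)                          # evicts "b"
    print(cache.get("b"))                      # None: "b" was evicted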
6. Graphs (Vertices and Edges)
Vertices (Nodes): Represent entities or points in the graph, such as processes, resources, or data points, connected by edges.
Edges (Arcs): Connections between vertices, representing relationships or interactions between those entities. Edges can be directed (one-way) or undirected (two-way), and in some cases may have weights or costs associated with them.
Resource Allocation Graph (RAG): Used for deadlock detection, with processes and resources as vertices.
Cycle in RAG: Indicates a potential or actual deadlock.
Weighted Directed Graph: Each edge has a cost or weight; used in shortest-path algorithms (e.g., Dijkstra's Algorithm).
Directed Acyclic Graph (DAG): Used in task scheduling; contains no cycles.
Depth-First Search (DFS): Used to detect cycles in graphs.
Topological Sorting: Used on a DAG to determine the sequence of tasks (see the topological-sort sketch after section 9).
Multigraph: Allows multiple edges between the same pair of vertices.

7. Process Synchronization
Definition: Ensures that multiple processes or threads can safely access shared resources without conflicts or data corruption. It prevents issues like race conditions by controlling the execution order of processes, often using tools like semaphores or locks to manage access to critical sections.
Race Condition: Unpredictable behavior due to unsynchronized access to shared data.
Semaphore: A counter used to control access to a shared resource (see the semaphore sketch after section 9).
Monitors: High-level synchronization construct that encapsulates shared resources.
Critical Section Problem: Ensuring that only one process enters the critical section at a time.
Readers-Writers Problem: Multiple readers can access shared data, but writers need exclusive access.
Spinlocks: Simple lock where a process waits (spins) for the lock.
Barriers: Ensure all processes reach a certain point before any proceed.

8. Memory Management
Paging: Divides memory into fixed-size blocks to avoid external fragmentation (see the paging sketch after section 9).
Segmentation: Divides memory into segments for logical units like functions or arrays.
Virtual Memory: Uses disk space to simulate a larger memory.
Memory Protection: Prevents one process from accessing another's memory.
Fragmentation:
○ Internal Fragmentation: Wasted space inside allocated memory blocks.
○ External Fragmentation: Gaps between allocated blocks due to deallocation.

9. Additional Concepts
Preemptive Scheduling: The CPU can be taken from a process before it completes.
Non-Preemptive Scheduling: A process retains the CPU until it finishes.
Throughput: The total number of processes completed in a given time frame.
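A minimal sketch (not from the source notes) of topological sorting from section 6, using Kahn's algorithm to order the tasks of a hypothetical DAG; a leftover node at the end means the graph contained a cycle and was not a DAG.

    from collections import deque

    def topological_sort(graph):
        """graph: dict mapping each task to the tasks that depend on it."""
        indegree = {node: 0 for node in graph}
        for deps in graph.values():
            for d in deps:
                indegree[d] = indegree.get(d, 0) + 1
        queue = deque(n for n, deg in indegree.items() if deg == 0)
        order = []
        while queue:
            node = queue.popleft()
            order.append(node)
            for d in graph.get(node, []):
                indegree[d] -= 1
                if indegree[d] == 0:
                    queue.append(d)
        if len(order) != len(indegree):
            raise ValueError("graph has a cycle, so it is not a DAG")
        return order

    tasks = {"compile": ["link"], "link": ["test"], "test": []}
    print(topological_sort(tasks))  # ['compile', 'link', 'test']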
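A minimal sketch (not from the source notes) of the semaphore from section 7: a binary semaphore serializes access to a shared counter, so the read-modify-write increment in the critical section cannot race.

    import threading

    counter = 0
    sem = threading.Semaphore(1)     # binary semaphore: one thread at a time

    def worker():
        global counter
        for _ in range(100_000):
            sem.acquire()
            counter += 1             # critical section: read-modify-write
            sem.release()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                   # always 400000; unsynchronized runs can lose updates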
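A minimal sketch (not from the source notes) of paging from section 8: a logical address is split into a page number and an offset, and a page table (hypothetical values here) maps page numbers to physical frames.

    PAGE_SIZE = 4096                 # 4 KiB pages
    page_table = {0: 5, 1: 2, 2: 9}  # page number -> frame number (example mapping)

    def translate(logical_address):
        page = logical_address // PAGE_SIZE
        offset = logical_address % PAGE_SIZE
        if page not in page_table:
            raise KeyError(f"page fault: page {page} is not in memory")
        return page_table[page] * PAGE_SIZE + offset   # physical address

    print(translate(5000))           # page 1, offset 904 -> 2*4096 + 904 = 9096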
