Process & Memory Management
Summary
This document provides an overview of process and memory management in operating systems. It covers topics such as process scheduling, memory hierarchy, and various allocation strategies. The document emphasizes different scheduling algorithms, focusing on their advantages and disadvantages.
Process Management & Memory Management

Process Scheduling
Definition: Process scheduling is the mechanism by which the operating system manages the execution of processes in a system.
Objective: Efficiently utilize the CPU and ensure fair execution for all processes.

Why Process Scheduling?
Resource Allocation: The CPU is a finite resource, and effective scheduling ensures optimal utilization.
Responsiveness: Users expect quick responses, and scheduling influences how responsive a system feels.
Throughput: Maximizing the number of processes completed per unit time.

Process States
1. New: The initial state when a process is first created. In this state, the process is being set up and initialized.
2. Ready: After initialization, the process moves to the ready state. It is waiting to be assigned to a processor for execution.
3. Running: The process moves to the running state when it is being executed on a CPU. On a multiprocessor system, multiple processes may be in the running state simultaneously on different processors.
4. Waiting/Blocked: A process enters the blocked state when it cannot proceed until some event occurs, such as the completion of an I/O operation or the availability of a resource.
5. Terminated: The process moves to the terminated state when it has finished its execution. At this point, the operating system releases the resources associated with the process.
6. Suspended (optional): Some systems have an additional "suspended" state. A process may be temporarily moved to this state to free up resources for other processes.

CPU Dispatcher
The component of the operating system responsible for making decisions about the execution of processes on the CPU. It decides whether the currently running process should continue running and, if not, which process from the ready queue should run next.
1. Process Selection:
The dispatcher selects the next process to run from the pool of ready processes.
2. Context Switching: If the selected process is different from the one currently running on the CPU, the dispatcher performs a context switch. This involves saving the context (state) of the currently running process and loading the context of the new process.
3. CPU Allocation: The dispatcher allocates the CPU to the selected process, allowing it to execute its instructions.

CPU Scheduling Algorithms (Process Selection)
First-Come, First-Served (FCFS)
Shortest Job First (SJF)
Round Robin (RR)
Priority Scheduling
Multilevel Queue Scheduling

First-Come, First-Served (FCFS)
Possibly the most straightforward approach to scheduling processes is to maintain a FIFO (first-in, first-out) run queue. New processes go to the end of the queue; when the scheduler needs to run a process, it picks the one at the head of the queue.
Advantage: Simple and easy to understand. It is also intuitively fair.
Disadvantage: May suffer from the "convoy effect," where short processes get stuck behind long ones.

Shortest Job First (SJF)
The shortest job is scheduled first. The biggest problem with sorting processes this way is that it tries to optimize the schedule using data we do not have: we do not know what a process's CPU burst time will be the next time it runs.
Advantage: Minimizes the average waiting time.
Disadvantages: May lead to "starvation" of longer processes, and in most scenarios it is impossible to estimate the duration of a CPU burst in advance.

Round Robin (RR)
Round robin scheduling is a preemptive version of first-come, first-served scheduling. Processes are dispatched in a first-in, first-out sequence, but each process is allowed to run for only a limited amount of time. This time interval is known as a time slice or quantum.
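The time-slice mechanism just described can be sketched as a small simulation (a minimal sketch; the process names and burst times are made up for illustration):

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate round-robin scheduling.

    processes: list of (name, burst_time) pairs, in arrival order.
    Returns a list of (name, completion_time) pairs, in finish order.
    """
    queue = deque(processes)   # ready queue, FIFO order
    clock = 0
    completed = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            completed.append((name, clock))        # process finished
    return completed

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# -> [('P3', 5), ('P2', 8), ('P1', 9)]
```

With a quantum of 2, the short process P3 finishes first even though it was queued last; preemption is exactly what plain FCFS lacks.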
Advantage: Round robin scheduling is fair in that every process gets an equal share of the CPU.
Disadvantage: Fairness depends on the time-slice size, and giving every process an equal share of the CPU is not always a good idea.

Priority Scheduling
Each process is assigned a priority (just a number). Of all processes ready to run, the one with the highest priority runs next.
Advantage: Priority scheduling provides a good mechanism where the relative importance of each process may be precisely defined.
Disadvantage: If high-priority processes use up a lot of CPU time, lower-priority processes may be postponed indefinitely, leading to starvation.
Process aging: The scheduler keeps track of low-priority processes that do not get a chance to run and increases their priority, so that eventually the priority is high enough for the process to be scheduled.

Multilevel Queue Scheduling
Groups processes into priority classes and assigns a separate run queue to each class.

Memory Management
Definition: Memory management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various processes to optimize overall system performance.
Importance: Efficient memory management is essential for multitasking, concurrent execution of processes, and resource utilization.

Memory Hierarchy
Memory is organized in a hierarchy, ranging from high-speed, small-capacity registers and caches to larger, slower main memory (RAM), and even slower, larger storage devices (hard drives, SSDs).
1. Registers
Description: Registers are the fastest and smallest type of memory, located directly within the CPU. They store data that is currently being processed by the CPU.
Characteristics: Extremely fast access times but limited in capacity.
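The speed differences between hierarchy levels are often summarized as an average memory access time (AMAT). A minimal sketch with assumed, purely illustrative latencies:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time for one level of the hierarchy:
    every access pays hit_time; the fraction miss_rate of accesses
    additionally pays miss_penalty to fetch from the slower level."""
    return hit_time + miss_rate * miss_penalty

# Assumed figures: 1 ns cache hit, 5% miss rate, 100 ns to reach main memory.
print(amat(hit_time=1.0, miss_rate=0.05, miss_penalty=100.0))
```

Real figures vary widely by hardware; the point is that even a small miss rate lets the slower level dominate the average.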
2. Cache Memory
Description: Cache memory is a small, fast type of volatile computer memory that provides high-speed data access to the processor and stores frequently used programs, applications, and data.
Levels: Modern processors typically have multiple levels of cache (L1, L2, and sometimes L3).
Characteristics: Faster than main memory, but more expensive per byte. Helps bridge the speed gap between the fast CPU registers and slower main memory.
3. Main Memory (RAM, Random Access Memory)
Description: RAM stores the data and machine code currently being used and processed by the CPU. It is volatile, meaning its contents are lost when the power is turned off.
Characteristics: Larger in capacity than cache memory but slower in access speed. It is the bridge between the fast caches and slower storage devices.
4. Secondary Storage (Hard Drives, SSDs)
Description: Secondary storage devices provide non-volatile, high-capacity storage for data and applications. Unlike RAM, data persists even when the power is turned off.
Characteristics: Much slower access times than RAM but significantly larger capacity. Used for long-term storage of programs and data.

Memory Access Time
The time it takes to access data at a given level of the memory hierarchy.
Hierarchy principle: Access times increase as you move down the memory hierarchy. Registers have the fastest access times, followed by cache, main memory, and secondary storage.
Cache Coherency: Ensuring that copies of data at different levels of the cache hierarchy are consistent and up to date.
Cache Lines: Data is transferred between levels of the cache hierarchy in fixed-size blocks called cache lines.

Address Spaces
Process Address Space: Each process in the system has its own address space, which is the range of valid addresses the process can use.
This range is allocated and managed by the operating system.
Kernel Address Space: The portion of memory reserved for the operating system kernel.

Memory Allocation Strategies
Fixed Partitioning: Memory is divided into fixed-size partitions, and each process is assigned a partition. Simple to manage, but can waste space through internal fragmentation when a process does not fill its partition.
Dynamic Partitioning: Memory is divided into variable-sized partitions to accommodate processes of different sizes. Requires dynamic allocation algorithms.

Fragmentation
Internal Fragmentation: Wasted memory within a partition, caused by allocating a larger block than necessary.
External Fragmentation: Free memory is scattered in small, non-contiguous blocks, making it difficult to allocate large contiguous blocks.

Memory Allocation Algorithms
Memory allocation algorithms are techniques used by operating systems to assign and manage memory space for programs and processes. They aim to use the available memory efficiently while minimizing fragmentation and ensuring proper allocation and deallocation of memory resources. There are several strategies for allocating free memory to processes upon request: First Fit, Best Fit, and Worst Fit.
1. First Fit
Description: The first-fit algorithm allocates the first available memory block that is large enough to accommodate the requested size.
Advantages: Simple and easy to implement.
Disadvantages: Can lead to fragmentation, both internal (unused space within a block) and external (small free blocks scattered throughout memory).
2. Best Fit
Description: The best-fit algorithm allocates the smallest available block that is large enough to accommodate the requested size.
Advantages: Reduces wasted space by selecting the closest fit.
Disadvantages: May lead to more external fragmentation, as small leftover fragments accumulate.
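The first-fit and best-fit searches described above can be sketched over a list of free-block sizes (a minimal sketch; the block sizes and request are made up for illustration):

```python
def first_fit(free_blocks, request):
    """Return the index of the first free block that can hold `request`,
    or None if no block is large enough."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None

def best_fit(free_blocks, request):
    """Return the index of the smallest free block that can hold `request`,
    or None if no block is large enough."""
    best = None
    for i, size in enumerate(free_blocks):
        if size >= request and (best is None or size < free_blocks[best]):
            best = i
    return best

blocks = [100, 500, 200, 300]   # free-block sizes, in address order
print(first_fit(blocks, 210))   # -> 1: the 500-unit block is the first that fits
print(best_fit(blocks, 210))    # -> 3: the 300-unit block is the tightest fit
```

On this free list the two strategies disagree: first fit takes the big 500-unit block, leaving less room for future large requests, while best fit picks the tighter 300-unit block at the cost of a longer search.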
3. Worst Fit
Description: The worst-fit algorithm allocates the largest available block, leaving behind the largest possible leftover fragment.
Advantages: Tends to leave leftover fragments large enough to be useful for future allocations.
Disadvantages: Consumes the large free blocks quickly, so later requests for large amounts of memory may be impossible to satisfy.

The choice of a memory allocation algorithm depends on various factors, including the nature of the applications running on the system, the types of memory requests, and the overall performance goals of the system. Different algorithms make different trade-offs in speed, efficiency, and complexity.

Thanks