Plat Tech Prefi Reviewer
Summary
This document is a handout on memory management and process scheduling, covering memory requirements management, types of addresses, commonly used terms, the requirements that memory management intends to satisfy, memory partitioning techniques, virtual memory, and process scheduling algorithms.
05 HANDOUT 1 (Memory Management)

Memory Requirements Management
The functionality of an operating system (OS) that manages the primary memory, including the movement of processes back and forth between the main and the secondary memory during execution.
• Tracks every memory location, whether it is allocated or free, and determines the amount of memory to be allocated to each process.
• Facilitates process allocation and updates the memory status accordingly.

Types of addresses used in a program before and after memory allocation:
• Symbolic addresses – These are the addresses used in a source code. Variable names, constants, and instruction labels are the basic elements of a symbolic address space.
• Relative addresses – These are the addresses generated at compilation time. A compiler converts symbolic addresses into relative addresses.
• Physical addresses – These addresses are generated by the loader when a program is loaded into the main memory.

Commonly used terms in memory management:
• Frame – A fixed-length block of main memory.
• Page – A fixed-length block of data that resides in the secondary memory. A page of data may temporarily be copied into a frame of the main memory.
• Segment – A variable-length block of data that resides in the secondary memory. An entire segment may temporarily be copied into an available region of the main memory, or the segment may be divided into pages, which can be individually copied into the main memory.
• Swapping – A mechanism in which a process can be swapped temporarily out of the main memory to the secondary memory in order to make memory space available for other processes. It is also known as a technique for memory compaction.
• Fragmentation – This occurs due to the continuous loading and removal of processes in memory, wherein free memory spaces are broken down into smaller fragments. It happens when processes cannot be allocated to memory blocks because the blocks are too small, so the blocks remain unused (see the sketch below).
o External fragmentation – This occurs when the allotted memory blocks are of varying sizes. It builds up due to the continuous removal of processes from the main memory.
o Internal fragmentation – This occurs when the allotted memory blocks are of a fixed size and a process needs more or less space than the size of the allotted block.
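To make the fragmentation arithmetic concrete, below is a minimal Python sketch of internal fragmentation in fixed-size blocks. The 16 KB block size and the process sizes are made-up illustrative values, not figures from the handout.

    # Minimal sketch: internal fragmentation in fixed-size blocks.
    # BLOCK_SIZE_KB and the process sizes are hypothetical values.
    BLOCK_SIZE_KB = 16

    process_sizes_kb = [13, 5, 16, 9]

    for size in process_sizes_kb:
        # A process occupies whole blocks; the unused tail of its last
        # block is internal fragmentation.
        blocks_needed = -(-size // BLOCK_SIZE_KB)  # ceiling division
        wasted = blocks_needed * BLOCK_SIZE_KB - size
        print(f"{size:2d} KB process -> {blocks_needed} block(s), "
              f"{wasted} KB internal fragmentation")

For example, a 13 KB process occupies one 16 KB block and wastes 3 KB inside that block. External fragmentation, by contrast, is the unusable space left between variable-size allocations.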
Requirements that memory management intends to satisfy:
• Relocation: The possibility that the program may be moved within the main memory due to swapping should always be considered. In addition, swapping active processes in and out of the main memory to maximize processor utilization is common practice.
• Protection: Each process should be protected against unwanted interference by other processes, whether accidental or intentional. Thus, programs in other processes should not be able to reference memory locations in a process, for reading or writing purposes, without permission.
• Sharing: Any protection mechanism must have the flexibility to allow several processes to access the same portion of the main memory. If a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than have its own separate copy.
• Logical organization: Most programs are organized into modules, some of which are unmodifiable (read-only or execute-only) and some of which contain modifiable data. It would be a great advantage if the operating system and computer hardware could effectively manage user programs and data in modules.
• Physical organization: The task of moving and organizing information flow between the main and the secondary memory should be a system responsibility. This task is the essence of memory management.

Computer memory is organized into at least two (2) levels, which are as follows:
o Main memory – A volatile memory that provides fast access at a relatively high cost. The main memory holds programs and data that are currently in use.
o Secondary memory – Usually a non-volatile memory with slower access at a cheaper cost. A secondary memory of large capacity is good long-term storage for programs and data.

Memory Partitioning Techniques

Fixed Partitioning – The main memory is divided into a number of static partitions at system generation time, and a process may be loaded into a partition of equal or greater size.
• Nowadays, the use of fixed partitioning is uncommon in the industry.
• It is simple to implement and produces only minimal operating system overhead.
• There is a high chance of inefficient use of memory due to internal fragmentation.
• The maximum number of active processes that can be accommodated is fixed.

Dynamic Partitioning – The partitions are created dynamically, and each process is loaded into a partition of exactly the same size as the process.
• This technique has no internal fragmentation, which results in a more efficient use of the main memory.
• Inefficient use of the processor can occur due to the need for memory compaction to counter external fragmentation.

Three (3) possible placement algorithms that can be considered in implementing dynamic partitioning (see the sketch below):
• Best-fit – Chooses the block that is closest to the requested size.
• First-fit – Scans the memory from the beginning and chooses the first available block that is large enough to accommodate the process.
• Next-fit – Scans the memory from the location of the last placement and chooses the next available block that is large enough to accommodate the process.
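As a concrete illustration of the three placement algorithms, here is a minimal Python sketch that scans a hypothetical list of free blocks; the block sizes, the 12 KB request, and the last-placement index are made-up values.

    # Minimal sketch of the three placement algorithms.
    # free_blocks lists hypothetical free partitions (in KB), in memory order.
    free_blocks = [20, 14, 30, 8]
    request = 12          # size of the process to place, in KB
    last_placement = 2    # index where the previous search ended (for next-fit)

    def first_fit(blocks, size):
        # Scan from the beginning; take the first block that is large enough.
        for i, block in enumerate(blocks):
            if block >= size:
                return i
        return None

    def best_fit(blocks, size):
        # Take the smallest block that is still large enough.
        candidates = [(block, i) for i, block in enumerate(blocks) if block >= size]
        return min(candidates)[1] if candidates else None

    def next_fit(blocks, size, start):
        # Scan from the last placement, wrapping around at most once.
        n = len(blocks)
        for offset in range(n):
            i = (start + offset) % n
            if blocks[i] >= size:
                return i
        return None

    print("first-fit:", first_fit(free_blocks, request))                 # 0 (20 KB)
    print("best-fit :", best_fit(free_blocks, request))                  # 1 (14 KB)
    print("next-fit :", next_fit(free_blocks, request, last_placement))  # 2 (30 KB)

Note how the same request lands in a different block under each policy: best-fit minimizes the leftover hole, while first-fit and next-fit trade placement quality for a shorter scan.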
Simple Paging – The memory is divided into a number of equal-size frames, while each process is divided into a number of equal-size pages of the same length as the frames. A process is then loaded by placing all of its pages into available frames (see the address-translation sketch at the end of this section).
• It plays an important role in implementing virtual memory.
• This technique reduces external fragmentation.
• This technique may cause a small amount of internal fragmentation.

Simple Segmentation – Each process is divided into a number of segments, and a process is loaded by placing all of its segments into dynamic partitions.
• There is no internal fragmentation in this technique, resulting in improved memory utilization and reduced overhead compared to dynamic partitioning.
• External fragmentation is a drawback of this technique.

Virtual Memory
An abstraction of the main memory, providing processes and the kernel with their own, almost infinite, private view of the main memory. Virtual memory is made possible through support from both the processor and the operating system.
• It supports multitasking, allowing processes and the kernel to operate on their own private address spaces without worrying about contention.
• It supports oversubscription of the main memory, allowing the operating system to transparently map virtual memory between the main and the secondary memory, as needed.
• It is not real memory; most operating systems map virtual memory onto real memory only on demand, when the memory is first populated or written.

Virtual Memory Paging – This partitioning method is similar to simple paging, but it does not necessarily load all of the pages of a process. It has no external fragmentation and provides a large virtual address space, which may lead to a higher degree of multiprogramming.

Virtual Memory Segmentation – This partitioning method is similar to simple segmentation, but it does not necessarily load all the segments of a process. It has no internal fragmentation and provides a large virtual address space, which may also lead to a higher degree of multiprogramming.
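To show how a page maps to a frame, here is a minimal Python sketch of logical-to-physical address translation under paging; the 1 KB page size and the page-table contents are made-up illustrative values.

    # Minimal sketch: translating a logical address under paging.
    PAGE_SIZE = 1024                  # bytes per page (and per frame)

    # Hypothetical page table: page number -> frame number in main memory.
    page_table = {0: 5, 1: 2, 2: 7}

    def translate(logical_address):
        page_number = logical_address // PAGE_SIZE   # which page
        offset = logical_address % PAGE_SIZE         # position within the page
        frame_number = page_table[page_number]       # look up the frame
        return frame_number * PAGE_SIZE + offset     # physical address

    # Logical address 2100 falls on page 2 at offset 52, which maps to frame 7:
    print(translate(2100))            # 7 * 1024 + 52 = 7220

Under virtual memory paging, the same lookup happens, except that a page absent from the table triggers a page fault and is loaded from the secondary memory on demand.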
06 HANDOUT 1 (Scheduling)

Process Scheduling
The act of selecting a job or a task that is to be dispatched. In some operating systems, other units of work, such as input/output (I/O) operations, may also be scheduled. It handles the removal of a running process from the central processing unit (CPU), followed by the selection of another process on the basis of a particular strategy or algorithm.

Process scheduling queues:
• Job queue – This queue contains all the processes in the system.
• Ready queue – This queue keeps the set of all processes residing in the main memory, ready and waiting to execute. A new process is always added to this queue.
• Device queue – This queue contains processes waiting for a device to become available. Note that each I/O device has its own queue.

Process scheduling is an essential part of a multiprogramming operating system. It divides CPU time among active processes and threads and maintains a notion of priority so that more important jobs or tasks execute sooner. The scheduling of processes on processors and individual CPUs is performed by the scheduler, a key component of the OS kernel.
• Long-term scheduling – This involves a long-term scheduler, also known as the job scheduler, which determines which programs are admitted to the system for processing. Thus, it controls the degree of multiprogramming. The primary objective of the job scheduler is to provide a balanced mix of jobs.
• Short-term scheduling – This involves a short-term scheduler, also known as the dispatcher, which is invoked whenever an event occurs that may lead to the blocking of the current process. It performs the actual change of a process from the ready state to the running state. The primary objective of the dispatcher is to increase system performance in accordance with the chosen set of criteria.
• Medium-term scheduling – This is a part of the swapping function. The medium-term scheduler is in charge of handling swapped-out processes. It actually reduces the degree of multiprogramming.

Swapping: A running process becomes suspended if it makes an I/O request, since it cannot make any progress toward completion. To remove the process from memory and make space for other processes, the suspended process is moved to the secondary storage; hence the term swapping.

Key scheduling criteria that are usually encountered in operating systems:
• Turnaround time (Tr): The interval of time between the submission of a process and its completion. This covers the actual execution time plus the time spent waiting for resources, including the processor (i.e., time spent in the ready queue).
Formula: Turnaround Time = Finish Time – Arrival Time
• Response time: The time from the submission of a request until the response begins to be received. The scheduling discipline should attempt to achieve a low response time and maximize the number of users receiving an acceptable response time.
• Burst time (Ts): The amount of time required for a process to be executed by the CPU. This criterion is also called the execution time or running time.
• Waiting time: The sum of the periods that a process spends waiting in the ready queue; the choice of scheduling algorithm directly affects it.
Formula: Waiting Time = Turnaround Time – Burst Time
• Throughput: The scheduling algorithm should attempt to maximize the number of processes completed per unit of time. This is a measure of how much work is being performed by the processor.
• Processor utilization: The percentage of time that the processor is busy. For expensive shared systems, this criterion is significant. In other systems, including single-user systems, it is less important.
• Priority: Process priorities can be modified dynamically by the scheduler to improve the performance of certain workloads. Note that when processes are assigned specific priorities, the scheduling algorithm should favor higher-priority processes.
• Fairness: Processes should be treated the same, and no process should suffer from starvation.
• Resource balancing: The scheduling algorithm should keep the resources of the system busy. Processes that will underutilize stressed resources should be favored.

In operating systems, workloads can be categorized as either of the following:
• CPU-bound – Applications that perform heavy compute operations, such as scientific and mathematical analysis, which are expected to have long runtimes.
• I/O-bound – Applications that perform input/output operations, such as web servers, file servers, and interactive shells, where low-latency responses are desirable.

Process Scheduling Algorithms
• Non-preemptive: Once a process is in the running state, it continues to execute until it terminates or blocks itself to wait for an I/O operation or to request some OS service.
• Preemptive: The currently running process may be interrupted and moved to the ready state by the OS. The decision to preempt may be made under the following circumstances:
o A new process arrives.
o An interrupt occurs that places a blocked process in the ready state.

First-Come First-Serve (FCFS)
All incoming processes join the ready queue. When the currently running process ceases to execute, the process that has been in the ready queue the longest is selected for running.
• FCFS performs better for long processes.
• It is a non-preemptive algorithm.
• It is easy to understand and implement.
• It is also identified as a strict queuing scheme: all jobs are executed on a first-come first-serve basis.
• The performance of this algorithm is poor, since the average waiting time is high.

Shortest Job First (SJF)
This algorithm selects and executes the process with the shortest expected or estimated processing time, so the processing time must be known in advance (see the sketch below).
• It is easy to implement in batch systems, where the required CPU time is known in advance, but cannot be implemented in interactive systems, where the required CPU time is unknown.
• It is the best approach to minimize the waiting time.
• It is a non-preemptive algorithm.
• It is also called shortest job next or shortest process next.
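To tie SJF to the metric formulas above, here is a minimal Python sketch of a non-preemptive SJF run that reports each process's turnaround time (Finish – Arrival) and waiting time (Turnaround – Burst); the (arrival, burst) pairs are made-up illustrative data.

    # Minimal sketch: non-preemptive Shortest Job First.
    # processes maps a name to hypothetical (arrival, burst) times.
    processes = {"P1": (0, 7), "P2": (1, 4), "P3": (2, 1), "P4": (3, 4)}

    time, remaining = 0, dict(processes)
    while remaining:
        # Consider only processes that have already arrived; if none has,
        # advance the clock to the next arrival.
        ready = {p: ab for p, ab in remaining.items() if ab[0] <= time}
        if not ready:
            time = min(a for a, _ in remaining.values())
            continue
        name = min(ready, key=lambda p: ready[p][1])   # shortest burst first
        arrival, burst = remaining.pop(name)
        time += burst                                  # run to completion
        turnaround = time - arrival
        print(f"{name}: finish={time} turnaround={turnaround} "
              f"waiting={turnaround - burst}")

With this data the order is P1, P3, P2, P4: once P1 finishes at time 7, the 1-unit job P3 jumps ahead of the earlier-arriving P2, which is exactly the bias that minimizes average waiting time.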
Shortest Remaining Time First (SRTF)
A preemptive version of the SJF algorithm. When a new process with a shorter remaining time joins the ready queue, the scheduler may preempt the currently running process and execute the new one.
• The scheduler always chooses and executes the process with the shortest expected remaining processing time, so it must have an estimate of the processing time to perform the selection function.
• The risk of starvation of long processes exists.
• It does not have the bias in favor of long processes found in a first-come first-serve setup.
• Unlike round robin, no additional timer interrupts are generated, which reduces overhead; on the other hand, elapsed service times must be recorded, which contributes to overhead.

Round Robin (RR)
This algorithm involves the generation of a clock interrupt at periodic intervals. When the interrupt occurs, the currently running process is placed in the ready queue, and the next ready job is selected on a first-come first-serve basis. This technique is known as time slicing, because each process is given a slice of time before being preempted.
• It is a preemptive algorithm.
• Each process is provided with a fixed time to execute, called the quantum. Once a process has consumed its allotted quantum, it is preempted, and another process executes for a given period of time.

Non-Preemptive Priority (NPP) Scheduling
Each system process is assigned a corresponding priority, and the scheduler always chooses a process of higher priority (see the sketch below). A particular challenge in implementing pure priority-based scheduling is that lower-priority processes may suffer starvation; this can occur if there is a constant and steady supply of higher-priority processes. On the other hand, the priority of a process can still change based on its age or execution history.
• It is a non-preemptive algorithm.
• It is the most common scheduling algorithm in batch systems.
• Each process is assigned a priority, and the process with the highest priority is executed first. Processes with the same priority level are executed on a first-come first-serve basis (i.e., by arrival time).
• The priority of a process can be based on memory requirements, time requirements, or any other resource requirements.
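As a closing illustration, here is a minimal Python sketch of non-preemptive priority scheduling with a first-come first-serve tie-break (lower number = higher priority); the (arrival, burst, priority) tuples are made-up illustrative data.

    # Minimal sketch: non-preemptive priority scheduling.
    processes = [
        # (name, arrival, burst, priority); lower number = higher priority
        ("P1", 0, 5, 2),
        ("P2", 1, 3, 1),
        ("P3", 2, 8, 1),
        ("P4", 3, 2, 3),
    ]

    time, pending = 0, list(processes)
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:
            time = min(p[1] for p in pending)    # idle until the next arrival
            continue
        # Highest priority first; equal priorities fall back to arrival time.
        ready.sort(key=lambda p: (p[3], p[1]))
        name, arrival, burst, priority = ready[0]
        pending.remove(ready[0])
        time += burst                            # non-preemptive: run to completion
        print(f"{name} (priority {priority}): finish={time}, "
              f"turnaround={time - arrival}")

Note that the low-priority P4 runs last even though it is the shortest job; with a steady stream of higher-priority arrivals, it would starve, which is exactly the challenge described above.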