Operating System Functions: Memory Management PDF

Document Details


Uploaded by AdulatoryTaiga

Asia Pacific Institute of Information Technology (APIIT)

Tags

operating systems, memory management, computer science, computer architecture

Summary

This document explores operating system functions related to memory management. It covers the user and system perspectives on the OS, fundamental OS functions concerning hardware and software, and memory management concepts including hardware resources, process scheduling, the role of RAM, memory partitioning, paging, segmentation, and page replacement.

Full Transcript


Operating System Functions: Memory Management

What is an Operating System?

The operating system (OS) can be viewed from two perspectives:

- User's perspective: a set of instructions that tells the computer what to do, how to do it, and when to do it. It schedules and organizes all computer activities.
- System operation perspective: a manager of hardware and software resources that enables user programs to run.

Three fundamental OS functions:

1. Accepting commands from the user.
2. Managing hardware resources for user programs.
3. Managing all running software programs.

Hardware Resources

Hardware resources are shareable: different programs can use the same hardware at different times, and the OS coordinates these shared resources efficiently. Examples of shareable resources include storage hardware, the camera, and the microphone.

Process Scheduling and Resources

A process is an application or program in execution. The OS and CPU execute instructions on behalf of the software. Every instruction needs two crucial resources:

- CPU time: the time a program spends using the CPU.
- Memory space: the space in memory for the program's data and instructions.

Modern computers have multiple CPU cores (dual-core, quad-core, octa-core, and so on), but the OS typically treats them as a single unit and ensures fair CPU time allocation among running applications.

Memory Management

What is RAM?

RAM (Random Access Memory) is the computer's main memory. Every program must be loaded into RAM before it can be executed; think of it like loading a game cartridge into a game console.

Loading and Executing Programs

- The OS loads programs from storage (the hard drive) into RAM.
- The OS provides an interface (like an application drawer) to select and launch programs.
- Launching a program means instructing the OS to load it from storage into RAM.

Memory Usage Monitoring

The OS tracks memory usage for each running program. Memory pressure indicates how efficiently memory is being used; high memory pressure can lead to performance issues. Example metrics: green (efficient), yellow (moderate pressure), and red (high pressure).

Memory Components in Activity Monitor

The Activity Monitor (or an equivalent system monitor) displays:

- Total RAM: the amount of RAM installed.
- RAM in use: the amount of RAM currently used by running programs.
- Memory pressure: an indication of how efficiently memory is utilized.
- Swap space: used when RAM is full; slower than RAM.
- Cache files: temporary files kept to improve program performance.

Memory Management Study Guide

Memory Management Basics

Memory management is the OS function that allocates and tracks memory, a crucial resource for all applications. The amount of memory needed varies with application size; large programs require more memory than smaller ones. The two fundamental resources applications require are CPU time (covered in the previous lecture) and memory space.

By the end of this lesson, you should be able to:

- Describe terms related to memory management (e.g., paging, segmentation).
- Perform page replacement calculations.

Logical vs. Physical Memory

Physical Memory: the tangible hardware (the RAM chip) and its available space; programs occupy this space. Think of a classroom with 200 seats, where the seats represent the available memory. Physical memory is the actual RAM in your computer.

Logical Memory: a way of viewing or organizing memory. It is like an attendance list: it represents the contents of the classroom (physical memory) but organizes it differently (e.g., alphabetically); the order in the list does not reflect the physical arrangement. Logical memory is how the operating system sees the memory, and it may be organized differently from the physical layout for efficiency. The operating system manages logical memory, translating logical addresses to physical addresses for data retrieval.

Example: shopping online. The website (logical) represents the physical store. Searching the website is easy; finding an item in the physical store requires more effort. When you place an order, the website translates your selection (a logical address) to the physical location of the item in the store so it can be retrieved and delivered.

The fetch-decode-execute-store cycle relies on this translation: fetching involves finding the physical location of data or instructions in memory.
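To make the translation step concrete, here is a minimal sketch, not from the lecture, in which the mapping table and all addresses are invented for illustration: the program refers to items by logical position, and a lookup supplies the physical location.

```python
# Minimal sketch of logical-to-physical address translation (illustrative only).
# The "logical view" is the ordered list a program sees; the "physical view" is
# where each item actually sits in RAM. All addresses here are invented.

logical_to_physical = {
    0: 0x4A00,   # logical address 0 is stored at physical address 0x4A00
    1: 0x1F80,
    2: 0x7C40,
}

def fetch(logical_address: int) -> str:
    physical_address = logical_to_physical[logical_address]  # translation step
    return f"logical {logical_address} -> physical {hex(physical_address)}"

for addr in range(3):
    print(fetch(addr))
```

The page table introduced later in these notes formalizes exactly this kind of lookup, at the granularity of fixed-size pages.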
Memory Partitioning

Imagine an empty plot of land (memory). Leaving it empty is inefficient; partitioning it into designated parking spaces (memory partitions) improves organization and efficiency.

- Unpartitioned memory: like an empty parking lot, disorganized and difficult to manage.
- Partitioned memory: a parking lot with designated spaces and roads for access, organized and efficient. Each car (program) has a designated space.

This analogy explains how memory is partitioned so that programs can run concurrently.

Memory Management: Fixed vs. Variable Partitioning

Parking Lot Analogy

Imagine a parking lot with 10 spots, each with a unique address (1-10). By tracking which spots are occupied and which are free, we can manage the lot's capacity and occupancy. This is similar to how an operating system manages RAM.

RAM and Partitioning

- RAM (Random Access Memory): a physical device with electrical components that stores data.
- Operating system's role: the OS partitions RAM so data can be allocated effectively.
- Partitioning methods: fixed partitioning (all partitions are of equal size) and variable partitioning (partitions are of varying sizes).

Fixed Partitioning

In fixed partitioning, RAM is divided into equally sized blocks. Large programs are broken into smaller chunks to fit into multiple blocks. Example: a program needing 5 blocks will occupy 5 contiguous blocks, and different programs occupy different sets of blocks.

Variable Partitioning

In variable partitioning, the available RAM is treated as a single, continuous space. When a program requests memory, the system allocates a space of the required size, dynamically creating a partition. As programs finish, their memory is released, creating free space. Example: a small program takes a small block, a large program takes a larger block; the size of the block adjusts to the program's needs, like a bus with varying-sized seats.

Fixed vs. Variable Partitioning: A Comparison

| Feature | Fixed Partitioning | Variable Partitioning |
|---|---|---|
| Partition size | All partitions are the same size. | Partitions vary in size. |
| Memory allocation | Programs are divided into chunks to fit the partitions. | Space is allocated dynamically to the program's needs. |
| Space utilization | Can lead to internal fragmentation (wasted space inside partitions). | Can lead to external fragmentation (scattered free space). |
| Flexibility | Less flexible. | More flexible. |
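To make the variable-partitioning idea concrete, here is a minimal first-fit allocator sketch. It is not from the lecture; the free-list representation, the first-fit policy, and all sizes are assumptions chosen for illustration.

```python
# Sketch of variable partitioning with a first-fit policy (illustrative only).
# Free memory is kept as a list of (start, size) holes; a request carves a
# partition out of the first hole large enough to satisfy it.

free_holes = [(0, 1024)]          # one 1024-unit hole: all of RAM is free

def allocate(size):
    """Return the start address of a new partition, or None if no hole fits."""
    for i, (start, hole_size) in enumerate(free_holes):
        if hole_size >= size:                     # first hole that fits
            remaining = hole_size - size
            if remaining:
                free_holes[i] = (start + size, remaining)
            else:
                free_holes.pop(i)
            return start
    return None                                   # full, or externally fragmented

def release(start, size):
    """Return a partition to the free list (no hole coalescing in this sketch)."""
    free_holes.append((start, size))

a = allocate(200)    # small program gets a small partition
b = allocate(500)    # larger program gets a larger partition
release(a, 200)      # small program finishes; its hole becomes available again
print(free_holes)    # scattered holes illustrate external fragmentation
```

A fixed-partitioning allocator would instead keep a fixed array of equal-sized blocks and mark them used or free, which is simpler but wastes the unused tail of each block (internal fragmentation).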
Variable Partitioning Clarification

The initial parking lot example is not an accurate picture of variable partitioning. In a true variable-partitioning system, available memory is seen as one empty space, and the required space is allocated in real time as programs arrive:

1. A program requests space.
2. The system determines the space needed.
3. The system allocates that space.
4. As programs terminate, the freed space can be reallocated.

Bus Analogy

Variable partitioning is like a bus: passengers (programs) board and alight at different stops, occupying varying amounts of seating (memory space), and the space left behind becomes available to other passengers. A bus with fixed seats is analogous to fixed partitioning.

Memory Management: Fixed vs. Variable Partitioning

- Fixed partitioning: imagine assigning fixed-size seats in a room. A large person won't fit in a small seat; they need a larger space. This is analogous to fixed partitioning in memory, where processes are assigned fixed memory blocks.
- Variable partitioning: in contrast, variable partitioning is like adjustable seating; a large person can be accommodated by combining smaller spaces. This makes memory allocation more efficient.

Virtual Memory Explained

Virtual Memory: computer-generated memory that expands available memory by borrowing space from the hard drive. It allows a system to run more programs than its physical RAM alone would allow.

The concept: think of it like renting a van for a large family gathering instead of buying a van you will use only once a year; renting is the more cost-effective solution. The hard drive is the cheap, large "van" and RAM is the expensive, smaller "saloon car."

How it works:

- When RAM is full, the system borrows space (e.g., 2 GB) from the hard drive, creating virtual memory (swap space).
- If a new program needs to be loaded, an existing, unused program is swapped out (moved from RAM to the hard drive).
- The new program is then swapped in (loaded into the freed RAM space).
- When the new program finishes, its space is freed and the previously swapped-out program can be swapped back in.
- This swapping is managed by the operating system's memory manager.

Example: imagine your RAM as a table. If it is full of tools, you move some aside to make room for new ones. This is analogous to swapping processes in and out.

Logical vs. physical memory: the CPU "sees" the combined RAM and virtual memory (e.g., 10 GB = 8 GB physical + 2 GB virtual), but only the physical memory (8 GB) is used directly. The operating system handles the swapping between RAM and the hard drive.
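A rough sketch of the swap-out/swap-in idea just described. The RAM capacity, the program sizes, and the "evict the first resident program" policy are all assumptions invented for the example; a real memory manager works at page granularity and chooses victims far more carefully.

```python
# Illustrative sketch of whole-program swapping between RAM and swap space.
# Capacities and the victim-selection policy are invented for this example.

RAM_CAPACITY = 8          # units of memory available in RAM
ram = {}                  # program name -> size (currently resident)
swap_space = {}           # program name -> size (swapped out to disk)

def load(program, size):
    """Load a program into RAM, swapping out resident programs if needed."""
    while sum(ram.values()) + size > RAM_CAPACITY and ram:
        victim, victim_size = next(iter(ram.items()))   # naive policy: first resident program
        swap_space[victim] = ram.pop(victim)             # swap out to the hard drive
        print(f"swapped out {victim} ({victim_size})")
    ram[program] = size                                  # swap in / load the new program
    print(f"loaded {program} ({size}); RAM now {ram}")

load("editor", 3)
load("browser", 4)
load("game", 5)           # RAM is full, so resident programs are swapped out first
print("swap space:", swap_space)
```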
Page and Memory Management Techniques

The lecture continues with paging and segmentation, two crucial memory management techniques that build on the foundation of virtual memory discussed above. The comparison between the user's logical memory and physical memory was further elaborated with the analogy of an attendance list versus the physical arrangement of students in a classroom: the logical representation (the list) helps manage the physical arrangement. Similarly, the CPU uses the logical representation of memory (including virtual memory) while the OS handles the physical aspects of memory management.

Virtual Memory and Memory Management

Virtual Memory vs. Physical Memory

Virtual memory is a system that allows programs to use more memory than is physically available. It works by swapping parts of programs between RAM and the hard drive (swap space), which allows the execution of programs larger than the physical RAM. The operating system maintains a list of all running programs in memory (an "attendance list") and their physical locations; some may reside on the hard drive in swap space.

The Role of Garbage Collection

In modern operating systems, garbage collection is done automatically: the operating system identifies and removes data or processes that are no longer in use or have no references, reclaiming memory space.

Process Control Block (PCB)

The PCB acts like an "ID card" for each running program. It contains information about the program's resource usage, allowing the operating system to track what resources each program is using. This is crucial for garbage collection.

Garbage Collection in Action

- A process (e.g., p1) requires data (e.g., file x).
- The PCB for p1 contains a pointer to file x.
- The operating system examines all PCBs.
- If a file (or other data) has no pointers from any process, it is considered garbage.
- Garbage collection removes the unneeded file or data from memory.
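The pointer-checking rule above can be sketched as a tiny reference scan. The PCB layout and the resource names below are simplifications invented for the example; real operating systems track far more state per process.

```python
# Sketch of the "no pointers from any PCB means garbage" rule described above.
# The PCB structure and resource names are simplified assumptions for illustration.

pcbs = {
    "p1": {"open_resources": ["file_x", "socket_a"]},
    "p2": {"open_resources": ["file_x"]},
}
loaded_resources = {"file_x", "socket_a", "file_old_report"}

def collect_garbage():
    """Remove resources that no PCB points to and return what was reclaimed."""
    referenced = {r for pcb in pcbs.values() for r in pcb["open_resources"]}
    garbage = loaded_resources - referenced
    for resource in garbage:
        loaded_resources.discard(resource)   # reclaim the memory it occupied
    return garbage

print("collected:", collect_garbage())        # file_old_report has no references
print("still loaded:", loaded_resources)
```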
Overlay Codes and Memory Management

Older programming required manual memory management: programmers wrote overlay code to load and unload parts of a program themselves. Modern operating systems handle memory management automatically, eliminating the need for programmers to worry about space or write overlay code.

Paging and Segmentation

Virtual memory is implemented using demand paging or demand segmentation. These are the two core techniques for managing virtual memory.

Paging

Paging partitions memory into fixed-size blocks called frames. Programs are also divided into fixed-size blocks called pages, with page size equal to frame size. Pages can be loaded into non-contiguous frames in memory.

Segmentation

Segmentation uses variable-sized partitions, which can make more efficient use of memory in some cases (covered in more detail below).

Fixed vs. Variable Partitioning

| Partitioning type | Description | Analogy |
|---|---|---|
| Fixed | Memory is divided into predefined partitions (frames) of equal size. | Seats in a bus (fixed size). |
| Variable | Partitions are allocated dynamically based on the program's needs. | A bench in a train (variable space per passenger). |

Demand Paging

Demand paging is an advanced paging technique where pages are loaded into memory only when they are needed (expanded below).

Virtual Memory Techniques

Paging technique

Definition: RAM is partitioned into fixed-size frames and a program into fixed-size pages, with frame size equal to page size for easy swapping. This technique is crucial for implementing virtual memory, allowing the system to borrow hard drive space when RAM is full. The fixed sizes simplify swapping: finding space for a new program does not require searching for an equally sized program to remove.

Efficiency: fixed-size partitions make swapping more efficient than variable-sized partitions. The system can easily determine the number of pages needed and swap them in or out.

Demand paging

Definition: an advanced form of paging where only the needed pages are loaded, in contrast to regular paging, which loads the entire program.

Analogy: imagine a library book. Regular paging is like borrowing the whole book even if you only need one section; demand paging is like borrowing only the section you need.

Process: pages are loaded as needed and swapped out when no longer required, which maximizes memory usage.

Example: a program with 10 pages might only use 2 pages at any given time. Demand paging loads these 2 pages, then swaps them for the next 2, and so on.

Kitchen analogy: following a 10-page recipe. Instead of placing all 10 pages on the counter, you use only 2 pages at a time, swapping them out as needed. This minimizes counter space usage.

Virtual memory

Definition: a technique that creates the illusion of more memory than is physically available. This is achieved by borrowing space from the hard drive (swap space) and swapping data between RAM and the hard drive.

Example: if your counter (RAM) is full, you move some items to the floor (hard drive) to make room for new items, and retrieve the floor items when needed.

User control: operating systems usually manage swap space automatically, but some allow users to adjust the amount of virtual memory. This requires advanced user knowledge and changes to system settings.

Clarifications

| Question | Answer |
|---|---|
| Definition of virtual memory | A technique, not an allocation of addresses. It uses hard drive space as swap space to extend the usable memory beyond physical RAM. |
| User control of swap space | Typically managed by the OS, but adjustable in some systems through advanced settings. |

Paging

The paging technique partitions programs into pages and memory into frames of equal size, which simplifies the swap-in/swap-out process: needing 5 pages simply requires 5 frames, with no complex space calculations.

Paging: a memory management scheme where both programs and memory are divided into fixed-size blocks (pages and frames, respectively) for efficient swapping.

Virtual Memory and Swap Space

Whether a computer with a large amount of RAM needs swap space depends on the running workload. Even with free RAM, swap space might be used for efficiency: available RAM may be fragmented (non-contiguous), hindering the loading of large programs, and scattered pages can negatively affect caching.

Swap Space: a portion of the hard drive used as an extension of RAM, allowing the system to store less frequently used data.

Network Address Translation (NAT)

The analogy between NAT and virtual memory is not perfect: NAT maps network addresses, while virtual memory manages program access to physical memory.

Virtual Memory and RAM

Virtual memory uses the hard drive, not RAM, for swap space; the hard drive provides additional space when RAM is full. Swapping between RAM and the hard drive is slower than accessing RAM directly.

Demand Paging: a virtual memory technique where pages are loaded into memory only when needed.

Logical vs. Physical Memory

Logical memory (the program's view) is divided into pages. The page table maps logical pages to physical memory frames.

Page Table: a data structure that translates logical addresses (page numbers) to physical addresses (frame numbers).

Demand Paging vs. Regular Paging

- Demand paging: loads only the needed pages, reducing paging time and memory usage.
- Regular paging: loads all pages of a program.
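A small sketch of how a page table turns a logical address into a physical one. Splitting a logical address into a (page number, offset) pair is the textbook convention; the page size, the frame numbers, and the example addresses below are invented for illustration.

```python
# Sketch of page-table address translation (numbers are invented for illustration).
# A logical address is split into a page number and an offset within that page;
# the page table supplies the frame that currently holds the page.

PAGE_SIZE = 1024                    # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}     # page number -> frame number (pages currently in RAM)

def translate(logical_address):
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    if page_number not in page_table:
        raise LookupError(f"page fault: page {page_number} is not in RAM")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

print(translate(1 * PAGE_SIZE + 100))   # page 1, offset 100 -> frame 2, offset 100
try:
    translate(3 * PAGE_SIZE)             # page 3 is not loaded: a page fault is raised
except LookupError as err:
    print(err)
```

Under demand paging, the missing entry would trigger the page-fault handling steps described later, after which the page table gains an entry for page 3 and the instruction is retried.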
Segmentation

Segmentation views memory as a collection of segments (named blocks with lengths). A logical address consists of a segment name and an offset. Unlike paging, programs are broken into segments, not pages, and a segmentation table maps logical segments to physical memory locations.

Segmentation: a memory management scheme that allows programs to be divided into logically named segments of varying sizes.

Memory Segmentation

Segment vs. Page

- Segments: variable sizes, allowing program functionalities to be partitioned efficiently; each segment can be uniquely sized.
- Pages: fixed sizes.

Segments offer an advantage by allowing different program functionalities to be placed in segments of varying sizes, in contrast to pages, which always keep a consistent, fixed size. This flexibility in segment size improves the efficiency of memory management.

Advantages of Segments

Segments allow efficient memory utilization in programs with multiple functionalities: each functionality can be a separate segment, and segments can be reused, minimizing redundant memory allocation.

Consider a web browser. Each new tab is essentially a duplicate of the previous page, the same program operating on a different dataset (a different website). Using segments, the browser does not need multiple copies of the core code that decodes website source code; one code segment can be shared by all open tabs, significantly reducing memory usage.

Shared Segmentation Example

Consider a web browser rendering multiple web pages. Each page requires decoding its source code. With segments, the decoding code sits in one segment shared among all pages; only the data being decoded (each website's unique source code) differs. This is analogous to having multiple copies of the same program open while using the same code across different datasets: the core operation (decoding) remains the same, only the input data varies.

| Process | Segment 0 (Code) | Segment 1 (Data) |
|---|---|---|
| P1 | 25286 | 18003 |
| P2 | 25286 | 68348 |

The table demonstrates shared segmentation. Processes P1 and P2 use the same segment 0 for code execution, indicating shared code, while each process has its own data segment (segment 1), reflecting different datasets for the same program.
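The base addresses in the table above can be read as entries in per-process segment tables. The sketch below assumes a classic (base, limit) segment-table layout; the base values follow the table, while the limits and offsets are invented for the example.

```python
# Sketch of segment-table translation with a shared code segment.
# Base addresses follow the table above; limits and offsets are invented.

segment_tables = {
    "P1": [(25286, 4000), (18003, 2000)],   # segment -> (base, limit)
    "P2": [(25286, 4000), (68348, 3000)],   # segment 0 (code) is shared with P1
}

def translate(process, segment, offset):
    base, limit = segment_tables[process][segment]
    if offset >= limit:
        raise LookupError(f"{process}: offset {offset} exceeds segment {segment} limit {limit}")
    return base + offset

print(translate("P1", 0, 120))   # both processes resolve code addresses to the
print(translate("P2", 0, 120))   # same physical location: the segment is shared
print(translate("P2", 1, 500))   # data segments differ per process
```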
Page Faults

Definition

A page fault occurs when the CPU requests a page that is not currently in RAM (physical memory). It is essentially an error triggered when the CPU needs a specific page of data but that page is not available in main memory, which makes it necessary to fetch the page from secondary storage (e.g., the hard drive).

Handling a Page Fault

1. Locate the missing page on the hard drive.
2. Find free space in RAM.
3. Copy the page from the hard drive into RAM.
4. Update the page table (the data structure that maps virtual addresses to physical addresses) to reflect the new location of the page.
5. Restart the interrupted instruction.

Page Replacement

Page replacement happens when a page fault occurs and RAM is full. When main memory is completely occupied and a needed page is not present, the system must select a page currently in RAM to be removed (swapped out) in order to free space for the missing page (swapped in) before it can be loaded.

Page Replacement Algorithms

We need to find an available frame in memory and place the needed page in it; to free a frame, we use a page replacement algorithm. Consider a list of pages the operating system needs to load in sequence, for example 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1. This is our reference string.

A reference string is a sequence of page numbers that represents the order in which pages are requested by the CPU.

Example: 3 Frames

Assume memory has only 3 frames (Frame 1, Frame 2, Frame 3), initially empty, and the reference string is 1, 2, 3, 4, 5, 6.

- The CPU requests page 1. It is not in memory, causing a page fault; the system loads page 1 into a frame.
- The CPU requests page 2. Another page fault occurs; page 2 is loaded.
- The CPU requests page 3. A third page fault occurs; page 3 is loaded. Memory is now full.
- The CPU requests page 4. A page fault occurs, and an algorithm must decide which page to replace.

Three main algorithms are:

- First In, First Out (FIFO)
- Optimal
- Least Recently Used (LRU)

FIFO

FIFO replaces the oldest page in memory. In the example above: page 4 replaces page 1, page 5 replaces page 2, and page 6 replaces page 3.

Page Fault Calculation

The number of page faults reflects system overhead: time spent locating data instead of processing it. More page faults mean lower efficiency. Think of a chef: more time spent sourcing ingredients means less time cooking.

Example: FIFO Calculation with 3 Frames

Scenario setup: we begin with 3 empty frames and the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

Step-by-step page replacement: the table below shows, for each request, the frame contents after the request, whether a page fault occurred, and which page (if any) was evicted.

| Step | Page requested | Frame 1 | Frame 2 | Frame 3 | Page fault | Page evicted |
|---|---|---|---|---|---|---|
| 1 | 7 | 7 | - | - | Yes | - |
| 2 | 0 | 7 | 0 | - | Yes | - |
| 3 | 1 | 7 | 0 | 1 | Yes | - |
| 4 | 2 | 2 | 0 | 1 | Yes | 7 |
| 5 | 0 | 2 | 0 | 1 | No | - |
| 6 | 3 | 2 | 3 | 1 | Yes | 0 |
| 7 | 0 | 2 | 3 | 0 | Yes | 1 |
| 8 | 4 | 4 | 3 | 0 | Yes | 2 |
| 9 | 2 | 4 | 2 | 0 | Yes | 3 |
| 10 | 3 | 4 | 2 | 3 | Yes | 0 |
| 11 | 0 | 0 | 2 | 3 | Yes | 4 |
| 12 | 3 | 0 | 2 | 3 | No | - |
| 13 | 2 | 0 | 2 | 3 | No | - |
| 14 | 1 | 0 | 1 | 3 | Yes | 2 |
| 15 | 2 | 0 | 1 | 2 | Yes | 3 |
| 16 | 0 | 0 | 1 | 2 | No | - |
| 17 | 1 | 0 | 1 | 2 | No | - |
| 18 | 7 | 7 | 1 | 2 | Yes | 0 |
| 19 | 0 | 7 | 0 | 2 | Yes | 1 |
| 20 | 1 | 7 | 0 | 1 | Yes | 2 |

Page Fault: an event that occurs when a requested page is not present in main memory. It leads to a page replacement so the requested page can be loaded.

Total Page Faults

The FIFO algorithm produces a total of 15 page faults for this reference string. The time taken to handle each page fault would then be factored in to obtain the total execution time.
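The fault count can be checked with a short simulation. This is a sketch written for this study guide, not code from the lecture; it models FIFO with a simple queue of resident pages.

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Simulate FIFO page replacement and return the number of page faults."""
    frames = deque()                 # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                 # page already resident: no fault
        faults += 1
        if len(frames) == frame_count:
            frames.popleft()         # evict the oldest page (first in, first out)
        frames.append(page)
    return faults

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_page_faults(reference, 3))   # 15, matching the table above
```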
Belady's Anomaly and Page Replacement Algorithms

Belady's Anomaly

Belady's Anomaly: a counter-intuitive situation in which increasing the number of available memory frames increases the number of page faults instead of decreasing them as expected. It is particularly observed with the First-In, First-Out (FIFO) page replacement algorithm and demonstrates that, with FIFO, more memory does not always mean better performance.

FIFO's weakness: FIFO replaces the oldest page in memory, which can lead to situations where frequently used pages are replaced unnecessarily, producing more page faults. Belady's Anomaly highlights this inefficiency.

Example: a graph of page faults versus the number of frames may show an initial decrease in page faults as frames increase, then a paradoxical increase, before the count finally falls again.

Optimal Page Replacement Algorithm

Concept: the optimal page replacement algorithm looks into the future to decide which page to replace. It replaces the page whose next use is furthest in the future, which guarantees the fewest possible page faults. The optimal algorithm is theoretically optimal but impractical for real-world use, because it requires knowing the future.

How it works:

1. Identify the next page request.
2. If the page is already in memory, no page fault occurs.
3. If the page is not in memory, a page fault occurs.
4. Determine when each page currently in memory will next be used.
5. Replace the page whose next use is furthest in the future.

Example: for the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 and 3 frames, the optimal algorithm minimizes page faults by replacing pages based on their future use, producing fewer page faults than FIFO. While theoretically perfect, it is not computationally feasible in a real system.

Least Recently Used (LRU) Algorithm

Concept: LRU replaces the page that has not been used for the longest period. It is a practical alternative to the optimal algorithm because it uses past usage to infer future usage, based on the principle of locality of reference: recently used pages are likely to be used again soon. LRU trades some theoretical optimality for practical feasibility.

How it works:

1. Keep track of when each page was last used (e.g., using a timestamp).
2. When a page fault occurs and memory is full, replace the page with the oldest timestamp.

Example: using the same reference string as before, LRU maintains the last-use time of each resident page and replaces the page with the oldest timestamp when a fault occurs. This results in fewer page faults than FIFO, though generally more than the optimal algorithm. LRU is computationally efficient and provides a good balance between performance and implementation complexity, making it suitable for real-world systems.
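As a check on the fault counts quoted in the worked example below, here is a small simulation of the optimal and LRU policies in the same style as the FIFO sketch earlier. It is illustrative code written for this guide, not part of the lecture.

```python
def optimal_page_faults(reference, frame_count):
    """Replace the resident page whose next use lies furthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(reference):
        if page in frames:
            continue
        faults += 1
        if len(frames) == frame_count:
            # For each resident page, find when it is next needed (infinity if never).
            next_use = {
                p: (reference.index(p, i + 1) if p in reference[i + 1:] else float("inf"))
                for p in frames
            }
            frames.remove(max(frames, key=next_use.get))
        frames.append(page)
    return faults

def lru_page_faults(reference, frame_count):
    """Replace the resident page that has gone unused the longest."""
    frames, last_used, faults = [], {}, 0
    for i, page in enumerate(reference):
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.remove(min(frames, key=last_used.get))   # least recently used
            frames.append(page)
        last_used[page] = i                                      # record most recent use
    return faults

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_page_faults(reference, 3))   # 9 page faults
print(lru_page_faults(reference, 3))       # 12 page faults
```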
Least Recently Used (LRU) Page Replacement Algorithm

The LRU algorithm replaces the page that has not been used for the longest period. To illustrate, take the same reference string used for FIFO, 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1, with a memory capacity of 3 frames.

Initial state: the frames are empty.

- 7: page fault; load 7. Frames: 7, _, _
- 0: page fault; load 0. Frames: 7, 0, _
- 1: page fault; load 1. Frames: 7, 0, 1
- 2: page fault; replace 7 (least recently used). Frames: 2, 0, 1
- 0: no page fault. Frames: 2, 0, 1
- 3: page fault; replace 1 (least recently used). Frames: 2, 0, 3
- 0: no page fault. Frames: 2, 0, 3
- 4: page fault; replace 2 (least recently used). Frames: 4, 0, 3
- 2: page fault; replace 3 (least recently used). Frames: 4, 0, 2
- 3: page fault; replace 0 (least recently used). Frames: 4, 3, 2
- 0: page fault; replace 4 (least recently used). Frames: 0, 3, 2
- 3: no page fault. Frames: 0, 3, 2
- 2: no page fault. Frames: 0, 3, 2
- 1: page fault; replace 0 (least recently used). Frames: 1, 3, 2
- 2: no page fault. Frames: 1, 3, 2
- 0: page fault; replace 3 (least recently used). Frames: 1, 0, 2
- 1: no page fault. Frames: 1, 0, 2
- 7: page fault; replace 2 (least recently used). Frames: 1, 0, 7
- 0: no page fault. Frames: 1, 0, 7
- 1: no page fault. Frames: 1, 0, 7

In this example, the LRU algorithm produces 12 page faults, compared with 15 for FIFO and 9 for the optimal algorithm on the same sequence. LRU offers a good balance between performance and practicality.

Thrashing

Thrashing occurs when a high percentage of time is spent on page replacement rather than on actual processing, causing a significant drop in CPU utilization. Think of a chef spending more time fetching ingredients than cooking: the computer spends more time swapping pages than executing instructions. This is highly inefficient and must be avoided.

Page Faults

A page fault happens when a process requests a page that is not currently in main memory (RAM). Even if there is free space in RAM, a page fault occurs if the requested page is not loaded. Imagine taking attendance and calling a student's name, but the student is absent: the seat in the classroom exists, yet the student is not there. Similarly, a page fault occurs even with available RAM if the requested page is not present.

Memory Management Concepts

- Segments: a logical division of a program.
- Frames: a portion of RAM.
- Pages: a portion of a program, the same size as a frame. A segment is a collection of pages.

All processing occurs in RAM; the hard drive serves as secondary storage for pages not currently in use. A page is swapped into a frame when it is needed and swapped out to make room for other pages.

Practice Questions

Practice questions will be covered in the tutorial sessions and will address memory management (partitioning systems and page replacement). These topics will be included in the examination.
