Demand Paging and Page Replacement Lecture Notes PDF
Temple University
Summary
These lecture notes cover the topics of demand paging and page replacement, key concepts in operating systems. They discuss the basic process of demand paging, performance considerations, and various page replacement policies. Examples and diagrams illustrate the concepts.
Full Transcript
Agenda: November 14 – Lecture 23
Reminders:
- Read 3 EP: Chapters 20 – 22; Virtual Memory Module 8
- Project 4 – Driving Simulation using threads and concurrency: demo due 11/20
- Project 5 – Implementing a Simple File System: 2nd deliverable due 11/20
- Attendance Quiz; Review Quiz 11/20

Quiz review:
- The inode in Linux is an example of simple indexed allocation of file blocks.
- With the "small number of extents" method of file space allocation on a disk, the link to the associated extent is stored as the last data element of the contiguous data segment linking to the extent.
- With virtual addressing that involves paging of the page tables, a program potentially makes up to ____ memory accesses to read from a memory cell (1, 2, 3, or 4).

Last Time:
- File Systems; Project 5
- Indexed Allocation; Multi-Level Indexing / the Linux inode
- Demand Paging; Page Replacement

Today:
- Demand Paging
- Page Replacement: FIFO, Optimal, LRU

Basic Demand Paging
1. Code references a page not in memory
2. Page fault is invoked
3. Find the location of the desired page on disk
4. Find a free frame:
   - If there is a free frame, use it
   - If there is no free frame, use a page replacement algorithm to select a victim frame and remove its content
5. Bring the desired page into the (newly) free frame; update the page and frame tables
6. Restart the process (and the instruction that faulted)
Note: the page is not yet in the TLB.

[Diagram: demand-paging page fault flow. The MMU translates page# to frame# through the page table and adds the offset to form the physical address. If the page is not present, a page-fault exception traps to the operating system; the page fault handler locates the page, loads it from disk into a frame, updates the PT entry, and the instruction is retried. The scheduler can run another process while the I/O is in progress.]

Evaluate the Performance of Demand Paging
Effective access time for a demand-paged memory:
- Memory access time (ma): roughly 0.002 microseconds to 0.2 microseconds (2 ns to 200 ns)
- If there are no page faults, the effective access time is about 2 * ma (the page table access is included)
- If there is a fault, the OS must find the page, read it into memory, and then access the cell
- Probability of a page fault: p (0 <= p <= 1); p should be close to 0 so that faults are rare
- Effective access time = (1 - p) * 2 * ma + p * (page_fault_time + 2 * ma)

Demand Paging: a page fault produces the following time-consuming acts
1. Trap to the OS (undefined address error)
2. Save registers and active state (ISR)
3. Determine the cause of the interrupt (it was a page fault)
4. Check that the page reference was legal and get the page location on the storage device
5. Issue a read to a free frame: make a request to the I/O subsystem and return. The request waits in the queue until the device services it, then waits for seek plus rotational latency, and the transfer begins. This is performed by the device driver, and it is all time spent before the transfer even starts.
6. While the I/O progresses, the faulting process changes state and the CPU is allocated to another task, so we can continue with other work in the meantime.
7. Interrupt from the I/O device when the read completes
8. Save registers and CPU info (partial context) of the currently running task (ISR)
9. Determine that the interrupt came from the I/O device
10. Correct the page table and other tables to show the page is now in memory
11. Wait for the CPU to be allocated to the faulting process again
12. Restore its context and execute the faulting instruction again

Performance of Demand Paging: defining page_fault_time
Summary of operations:
1. Service the page-fault interrupt
2. Locate and swap in the page
3. Restart the process
Steps 1 and 3 are software controlled: roughly 5 to 100 microseconds. Step 2 is much longer: moving-head disks incur seek time, rotational latency, and transfer time, typically more than 10 milliseconds in total, plus queuing time behind I/O that is already waiting.

Demand Paging Example
- Memory access time = 200 nanoseconds
- Average page-fault service time = 8 milliseconds
- [Assuming the first access goes through the TLB, so a non-faulting access costs one memory reference]
- EAT = (1 - p) * 200 + p * (8 milliseconds + 200 + 200) = (1 - p) * 200 + p * 8,000,400 = 200 + p * 8,000,200 (all in nanoseconds)
- If one access out of 1,000 causes a page fault, then EAT is about 8.2 microseconds. This is a slowdown by a factor of about 40!
- To limit the degradation in memory access to 10%: 220 > 200 + 8,000,200 * p, so 20 > 8,000,200 * p, giving p < 0.0000025, about one fault every 400,000 memory references.
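The arithmetic in this example is easy to sanity-check. Below is a small Python sketch (not part of the slides) that plugs the example's 200 ns access time and 8 ms fault-service time into the EAT formula above; the constant and function names are mine.

```python
# Effective access time (EAT) for demand paging, using the example numbers above.
# All times are in nanoseconds.

MA = 200                   # memory access time (ns)
FAULT_SERVICE = 8_000_000  # average page-fault service time: 8 ms expressed in ns

def eat(p):
    # EAT = (1 - p) * ma + p * (page_fault_time + 2 * ma), as in the example above
    return (1 - p) * MA + p * (FAULT_SERVICE + 2 * MA)

# One fault per 1,000 accesses -> about 8.2 microseconds, roughly a 40x slowdown.
p = 1 / 1000
print(f"EAT at p = 1/1000: {eat(p):,.1f} ns (slowdown ~{eat(p) / MA:.0f}x)")

# Largest p that keeps degradation under 10% (EAT < 220 ns):
# 220 > 200 + 8,000,200 * p  =>  p < 20 / 8,000,200
p_max = 20 / (FAULT_SERVICE + 2 * MA - MA)
print(f"p for <10% slowdown: {p_max:.7f} (about 1 fault per {1 / p_max:,.0f} references)")
```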
What happens if there is no free frame to load a page in response to a page fault?
- Page replacement: find some page in memory that is "not really in use" and overwrite or swap it out
- Algorithm: how do we choose this victim page?
- Performance: we want a replacement algorithm that results in a minimum number of page faults
- To replace modified pages, they must first be WRITTEN back to disk, so a modify bit is introduced in the page table
- The modified ("dirty") bit is used to reduce the overhead of page transfers

Demand Paging
- Modern programs require a lot of physical memory; memory per system is growing faster than 25%-30% per year
- But they don't use all their memory all of the time
- 90-10 rule: programs spend 90% of their time in 10% of their code
- It is wasteful to require all of a user's code to be in memory
- Solution: use main memory as a "cache" for disk

[Diagram: management of and access to the memory hierarchy. The processor datapath, on-chip cache (SRAM), and second-level cache are managed in hardware; main memory (DRAM) and secondary storage (SSD/disk, with tape as tertiary storage) are managed in software by the OS. The TLB and page tables are accessed in hardware. Approximate speeds run from 0.3 ns for registers, 1-30 ns for the cache levels, and about 100 ns for DRAM, to roughly 0.1 ms (100,000 ns) for SSD and 10 ms (10,000,000 ns) for disk; sizes run from hundreds of bytes of registers through kilobytes to megabytes of cache, gigabytes of DRAM, hundreds of gigabytes of SSD, and terabytes of disk.]

Demand Paging as Caching
- What "block size"? One page (e.g., 4 KB)
- How do we locate a page? First check the TLB, then do a page-table traversal
- What happens on a miss? Go to the lower level (i.e., disk) to fill the miss
- What happens on a write (write-through or write-back)? Definitely write-back: we need a dirty bit!
- What is the page replacement policy (i.e., LRU, Random, ...)? This requires more explanation (it's kinda LRU)

Illusion of Infinite Memory
[Diagram: TLB, page table, and disk, with 512 MB of physical memory, a 4 GB virtual address space, and a 500 GB disk.]
- The disk is larger than physical memory
- In-use virtual memory can be bigger than physical memory
- The combined memory of running processes can be much larger than physical memory; more programs fit into memory, allowing more concurrency
- Principle: a transparent level of indirection (the page table)
  - Supports flexible placement of physical data; data could be on disk or somewhere across the network
  - The variable location of data is transparent to the user program
  - This is a performance issue, not a correctness issue

Page Replacement - 1
- When all frames are occupied and a page fault occurs, one of the resident pages must be removed from memory to create a free frame (required to satisfy demand paging)
- Page replacement: overwriting a frame in memory with a different page loaded from the disk when needed
- Moving pages between memory and the disk is very time consuming, so the number of data moves must be minimized

Page Replacement - 2
- When the page to be removed has not been modified during execution, an exact copy still exists on the disk [requires only 1 I/O operation]
- If it has been modified, the page must first be copied back to the disk [requires 2 I/O operations]
- A modified bit (m-bit) is a binary flag in each page table entry that indicates whether the corresponding page has been modified during execution; it is set to 1 (modified) automatically by any instruction that stores (writes) data into the page

Use of the Modified Bit
[Diagram: all frames are filled; if page 5 has been written to and is chosen for replacement, it must be written back to disk before its frame is reused.]
A small sketch of this clean-versus-dirty eviction decision follows below.
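To make the 1-I/O versus 2-I/O distinction concrete, here is a minimal Python sketch of the eviction decision. The Frame structure, the evict_and_load function, and the write_back/read_in callbacks are illustrative names of mine, not anything from the slides or a real kernel.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    page: Optional[int] = None  # virtual page currently held, or None if the frame is free
    dirty: bool = False         # the modified (m) bit for that page

def evict_and_load(frames, victim_idx, new_page, write_back, read_in):
    """Replace the page in frames[victim_idx] with new_page.

    A clean victim costs 1 I/O (the read of the new page); a dirty victim
    costs 2 I/Os (write the old page back first, then read). The m-bit is
    what lets the OS skip the write-back when the victim is clean.
    """
    victim = frames[victim_idx]
    ios = 0
    if victim.page is not None and victim.dirty:
        write_back(victim.page)  # copy the modified page back to disk first
        ios += 1
    read_in(new_page)            # load the requested page from disk
    ios += 1
    frames[victim_idx] = Frame(page=new_page, dirty=False)
    return ios

# A dirty victim needs 2 I/O operations; a clean one needs only 1.
frames = [Frame(page=5, dirty=True), Frame(page=7, dirty=False)]
noop = lambda page: None  # stand-in for real disk I/O
print(evict_and_load(frames, 0, 9, noop, noop))  # -> 2
print(evict_and_load(frames, 1, 3, noop, noop))  # -> 1
```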
Basic Demand Paging (slide repeated from earlier: on a fault, find the page on disk, find a free frame or select a victim with the replacement algorithm, bring the page in, update the page and frame tables, and restart the faulting instruction; the page is not yet in the TLB).

Page Replacement Policies
Why do we care about the replacement policy?
- Replacement is an issue with any cache, but it is particularly important with pages: the cost of being wrong is high, because we must go to disk
- We must keep important pages in memory, not toss them out

FIFO (First In, First Out): throw out the oldest page
- Bad: it throws out heavily used pages instead of infrequently used ones

RANDOM: pick a random page for every replacement
- A solution for some TLBs; simple hardware
- Pretty unpredictable, which makes it hard to give real-time guarantees

MIN (Minimum): replace the page that won't be used for the longest time
- Great (provably optimal), but we can't really know the future...
- But the past is a good predictor of the future...

Replacement Policies (con't)
LRU (Least Recently Used): replace the page that hasn't been used for the longest time
- Programs have locality, so if something has not been used for a while, it is unlikely to be used in the near future
- So LRU seems like it should be a good approximation to MIN
How do we implement LRU? Use a list:
  Head -> Page 6 -> Page 7 -> Page 1 -> Page 2 <- Tail (LRU)
- On each use, remove the page from the list and place it at the head; the LRU page is at the tail
- Problems with this scheme for paging? We need to know immediately when a page is used so its position in the list can be changed, which means many instructions for each memory access
- In practice, people approximate LRU (more later)

Example: FIFO
Suppose we have 3 page frames, 4 virtual pages, and the following reference stream: A B C A B D A D B C B
Consider FIFO page replacement: A, B, and C fault and fill the frames; then D replaces A, A replaces B, B replaces C, and C replaces D.
FIFO: 7 faults
When referencing D, replacing A is a bad choice, since we need A again immediately. FIFO does not consider when a page was, or will be, used.

Example: MIN
Suppose we have the same reference stream: A B C A B D A D B C B
Consider MIN page replacement: where will D be brought in? Look for the resident page not referenced for the farthest time in the future. A, B, and C fault and fill the frames; when D is referenced, MIN evicts C (its next use is farthest away), and when C is referenced again it evicts A.
MIN: 5 faults
What will LRU do? It makes the same decisions as MIN here, but it won't always be best!

Is LRU guaranteed to perform well? When will LRU perform badly? Consider the following: A B C D A B C D A B C D
On 3 frames, LRU performs the same as FIFO here: every reference is a page fault! This is a fairly contrived example of a working set of N+1 pages on N frames.
MIN does much better on the same string.
A small simulation of FIFO, LRU, and MIN on both of these reference streams follows below.
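The fault counts in these examples are easy to reproduce. The following sketch is my own code, not from the lecture; it simulates FIFO, LRU, and MIN with 3 frames on the two reference streams used above, reporting 7, 5, and 5 faults for A B C A B D A D B C B, and showing FIFO and LRU faulting on every reference of A B C D A B C D A B C D while MIN does much better.

```python
from collections import OrderedDict

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), [], 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))  # evict the page that has been resident longest
            frames.add(r)
            queue.append(r)
    return faults

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0  # iteration order = recency order (oldest first)
    for r in refs:
        if r in frames:
            frames.move_to_end(r)           # a hit makes the page most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used page
            frames[r] = True
    return faults

def min_faults(refs, nframes):
    frames, faults = set(), 0
    for i, r in enumerate(refs):
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                # Evict the resident page whose next use is farthest in the future;
                # a page never referenced again counts as infinitely far away.
                def next_use(p):
                    rest = refs[i + 1:]
                    return rest.index(p) if p in rest else float("inf")
                frames.discard(max(frames, key=next_use))
            frames.add(r)
    return faults

for s in ("ABCABDADBCB", "ABCDABCDABCD"):
    refs = list(s)
    counts = [f(refs, 3) for f in (fifo_faults, lru_faults, min_faults)]
    print(s, "FIFO/LRU/MIN faults with 3 frames:", counts)
# ABCABDADBCB  -> [7, 5, 5]
# ABCDABCDABCD -> [12, 12, 6]  (FIFO and LRU fault on every reference; MIN does much better)
```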
Page Faults Versus the Number of Frames
Does adding memory reduce the number of page faults?
- General expectation: as a process gets more frames, it should fault less
- But no: Bélády's anomaly. Certain replacement algorithms (e.g., FIFO) don't have this seemingly obvious property!

Page Replacement Algorithms
- Objective: we want the lowest page-fault rate (for the future); in other words, a replacement algorithm that will not lead to higher rates of page faults (fewer future faults)
- Evaluate an algorithm by running it on a particular "string" of memory page references (a page reference string) and computing the number of page faults for that string
- In all our examples, the demo reference string (a sequence of page-number references) is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- Then test with 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 to compare algorithms
We begin with a discussion of page replacement algorithms with a fixed number of frames.

Demo
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- 3 frames (a maximum of 3 pages can be in physical memory at a time per process): 9 page faults
- 4 frames: 10 page faults
- Bélády's anomaly: more frames, yet more page faults

First-In-First-Out (FIFO) Algorithm, using the example data: FIFO illustrating Bélády's anomaly (a small simulation checking these counts appears below).
FIFO page replacement on the test reference string: 15 faults.

FIFO
- The algorithm is easy to implement, requiring only a single pointer to designate the oldest page in memory; when that page is replaced, the pointer is advanced to the next frame, modulo n (the number of frames)
- FIFO takes advantage of locality [a positive]:
  - In programs, most instructions execute sequentially; branches are a very small percentage of instructions
  - The likelihood of referencing a page from the distant past diminishes with time
  - Large data structures are often referenced sequentially
What could be the best we can do in terms of an algorithm?

MIN (Optimal) Algorithm
- Replace the page that will not be used for the longest period of time
- 4-frame example on 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults
- How can you determine this while a program is running? You can't, so MIN is used for measuring how well your algorithm performs
MIN (Optimal) page replacement on the test reference string: 9 faults.
Since we cannot implement this, can we approximate its behavior?
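The Bélády numbers above (9 faults with 3 frames, 10 with 4) and the 15-fault FIFO result on the test string can be checked with a few lines of Python. This is my own sketch, reusing the same FIFO fault counter as in the earlier example.

```python
def fifo_faults(refs, nframes):
    frames, queue, faults = set(), [], 0
    for r in refs:
        if r not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))  # evict the page loaded earliest
            frames.add(r)
            queue.append(r)
    return faults

demo = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for n in (3, 4):
    print(f"{n} frames: {fifo_faults(demo, n)} faults")
# 3 frames: 9 faults
# 4 frames: 10 faults  <- more frames, yet more faults: Belady's anomaly

test = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(f"test string, 3 frames: {fifo_faults(test, 3)} faults")  # 15 faults, as on the slide
```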