Questions and Answers
What is the primary advantage of virtual memory?
- It reduces the size of the physical memory required by a system.
- It eliminates the need for memory management.
- It allows programs to execute even if their entire code is not in physical memory. (correct)
- It increases the speed of memory access for all programs.
How does demand paging improve system efficiency?
- By increasing the amount of physical memory available to each process.
- By reducing I/O operations, loading pages only when they are needed. (correct)
- By predicting which pages will be used next and loading them in advance.
- By loading all pages of a process into memory at the start.
In demand paging, what is the role of a 'pager'?
- To predict which pages will be needed next.
- To allocate physical memory frames evenly among running processes.
- To manage the swapping of individual pages between memory and storage. (correct)
- To swap entire processes into memory.
What is the purpose of the valid-invalid bit in a page table entry?
What is a 'page fault'?
Which of the following steps occurs first when handling a page fault?
What is 'pure demand paging'?
Which hardware support is essential for demand paging?
Why is instruction restart needed in demand paging?
What is the purpose of a 'free-frame list'?
What is 'copy-on-write' (COW)?
What happens when a page fault occurs and there are no free frames available?
What is a 'dirty bit' used for in page replacement?
Which goal is most important when evaluating a page-replacement algorithm?
In the context of page replacement algorithms, what is a 'reference string'?
What is Belady's Anomaly?
Which of the listed algorithms is considered an 'optimal' page replacement algorithm?
What is the main advantage of the Least Recently Used (LRU) page replacement algorithm?
What is one way to implement the LRU page-replacement algorithm?
What is the purpose of the 'reference bit' in the LRU approximation algorithms?
Which page replacement algorithm is the second-chance algorithm based on?
In the Enhanced Second-Chance Algorithm, what does the ordered pair (0, 1) represent?
What is the main idea behind the Least Frequently Used (LFU) page replacement algorithm?
What is 'page buffering'?
What is 'raw disk' mode?
What is the concept of 'minimum number of frames' in the context of frame allocation?
What is the difference between fixed allocation and priority allocation?
What is a characteristic of global page replacement?
What is the main purpose of reclaiming pages using a free-frame list approach?
What does NUMA stand for in the context of memory access?
What is the performance benefit of allocating memory 'close to' the CPU in a NUMA system?
What is 'thrashing'?
What is the 'locality model'?
How does the operating system use the 'working-set model'?
What is 'page-fault frequency (PFF)' used for?
True or false: Kernel memory is treated the same as user memory.
What is 'prepaging'?
Which factor should be taken into account when deciding on a page size?
How is TLB Reach calculated?
Why is 'pinning' pages used?
Flashcards
Virtual memory
Separation of user logical memory from physical memory.
Virtual address space
Logical view of how a process is stored in memory, starting at address 0.
Demand paging
Only bring a page into memory when it is needed.
Pager
The component that swaps individual pages between memory and storage.
Lazy swapper
A swapper that never brings a page into memory unless it will be needed.
Valid-invalid bit
Page-table bit indicating whether the page is in memory (v) or not (i).
Page fault
A trap to the operating system caused by referencing a page that is not in memory.
Page fault handling
Locate the page on disk, find a free frame, read the page in, update the tables, and restart the instruction.
Free frame list
A pool of free frames the OS keeps for satisfying page-fault requests.
Zero-fill-on-demand
Zeroing out the contents of a frame before it is allocated.
Pure demand paging
Starting a process with no pages in memory, so every needed page is faulted in.
Swap space
Secondary-storage area used for paging; I/O to it is faster than file-system I/O.
Copy-on-write
Parent and child share pages; a page is copied only when one process modifies it.
Zero-fill-on-demand pages
Pages whose frames are zeroed out before being handed to a process.
Page replacement
Selecting a page in memory to evict when no free frame is available.
Victim frame
The frame selected by the page-replacement algorithm for eviction.
Frame-allocation algorithm
Decides how many frames each process receives.
Page-replacement algorithm
Decides which frame to replace, aiming for the lowest page-fault rate.
Reference string
A trace of page references used to evaluate replacement algorithms.
Belady's Anomaly
Adding more frames can cause more page faults (seen with FIFO).
Optimal Algorithm
Replace the page that will not be used for the longest time; unimplementable, but a useful benchmark.
Least Recently Used (LRU)
Replace the page that has not been used for the longest time.
LRU Approximation
Approximating LRU with a hardware reference bit instead of exact timestamps.
Second-chance algorithm
FIFO plus a reference bit; a page whose bit is set gets a second chance.
Counting Algorithms
Keep a count of the references made to each page (LFU, MFU).
Least Frequently Used
Replace the page with the smallest reference count.
Most Frequently Used
Replace the page with the largest count, reasoning that a small-count page was probably just brought in.
Page-Buffering Algorithms
Keep a pool of free frames so a faulting page can be read in before the victim is written out.
Minimum frame allocation
The fewest frames a process needs, set by the instruction-set architecture.
Equal allocation
Give every process the same number of frames.
Proportional allocation
Allocate frames in proportion to each process's size.
Global replacement
A faulting process may take a frame from any process.
Local replacement
A faulting process selects replacements only from its own frames.
Non-Uniform Memory Access
A system in which memory access time varies with the memory's distance from the CPU.
Thrashing
A process busy swapping pages in and out rather than executing.
Locality model
Processes migrate between localities, sets of pages that are used together.
Working-Set Model
The set of pages referenced in the most recent Δ references, used to decide frame allocation.
List of modified pages
Kept so dirty pages can be written out and marked clean whenever the paging device is idle.
Pinning
Locking a page into memory so it cannot be selected for replacement.
Prepaging
Bringing pages into memory before they are referenced, to reduce startup faults.
I/O Interlock
Locking pages used for device I/O into memory for the duration of the transfer.
Study Notes
- Chapter 10 discusses virtual memory concepts
- Virtual memory separates user logical memory from physical memory, allowing programs to be only partially loaded into memory
Virtual Memory Benefits
- Enables execution of programs not entirely in memory
- Allows logical address space to exceed physical address space
- Facilitates address space sharing among processes
- Supports more efficient process creation and enables concurrent execution of programs
- Reduces I/O for process loading/swapping
Virtual Address Space
- Represents the logical view of how a process is stored in memory, typically starting at address 0
- Physical memory is organized into page frames
- MMU maps logical to physical addresses
- Implemented via demand paging or demand segmentation
Demand Paging
- Pages are brought into memory only when referenced
- Reduces I/O and memory needs
- Allows faster response
- Supports more users
- A "lazy swapper" (pager) only swaps pages when needed
- Referencing a "not-in-memory" page triggers fetching it into memory
Basic Concepts of Demand Paging
- The pager brings in only the specific pages that are needed, rather than the whole process as in swapping
- New MMU functionality is required for demand paging
- If the needed pages are already memory resident, behavior is no different from non-demand paging
- Non-memory-resident pages must be detected and loaded without altering program behavior or requiring programmer changes
Valid-Invalid Bit
- Each page table entry has a valid-invalid bit to indicate whether the page is in memory (v) or not (i)
- Initially, the bit is set to i in all entries
- The MMU checks this bit during address translation, triggering a page fault if it is i
Handling a Page Fault
- A reference to a not-in-memory page results in a trap to the OS, a page fault
- The OS checks an internal table: if the reference is invalid, abort the process; if it is valid but the page is not in memory, continue
- Find a free frame
- Swap the page into the frame via a scheduled disk operation
- Reset the tables to indicate the page is now in memory, and set the valid bit to v
- Restart the instruction that caused the page fault
Aspects of Demand Paging
- The extreme case is "pure demand paging," which starts a process with no pages in memory
- Locality of reference keeps the page-fault rate reasonable in practice
- Hardware support is needed: a page table with a valid-invalid bit, swap space, and instruction restart
- The OS sets the instruction pointer to the first instruction of the process, which is non-memory-resident and so immediately causes a page fault
Free-Frame List
- When a page fault occurs, the OS brings the desired page into main memory from secondary storage
- Most operating systems maintain a free-frame list, a pool of free frames for satisfying such requests
- Free frames are typically allocated using a technique known as zero-fill-on-demand: a frame's contents are zeroed before allocation
- When a system starts up, all available memory is placed on the free-frame list
Demand Paging Stages
- Trapping to the OS
- Saving the user register and process state
- Determining that the interrupt was a page fault
- Checking the page reference legality and location of the page on the disk
- Issue a read from the disk to a free frame
- Wait in the device queue
- Wait for device seek and/or latency time
- Transfer page to a free frame
- Allocate the CPU to another user while waiting
- Receive an interrupt from the I/O subsystem
- Save the registers and process state for the user
- Determine the interrupt came from the Disk
- Correct page and other tables to show the page is in memory
- Wait for the CPU to allocate to this process again
- Restore user registers, process state and new page table
- Resume interrupted instruction
Performance of Demand Paging
- Page-fault service consists of servicing the interrupt, reading the page, and restarting the process
- The page-fault rate p ranges from 0 (no faults) to 1 (every reference is a fault)
- Effective Access Time (EAT) = (1 – p) x memory access time + p x (page fault overhead + swap page out + swap page in)
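Plugging numbers into the EAT formula shows why the fault rate must be kept tiny. A minimal sketch; the 200 ns access time and 8 ms fault-service time are assumed example figures, not measurements:

```python
def effective_access_time(p, mem_access_ns, fault_service_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Example: 200 ns memory access, 8 ms (8,000,000 ns) to service a fault.
# Even one fault per 1,000 accesses slows memory down by a factor of ~40.
print(effective_access_time(0.001, 200, 8_000_000))  # ~8199.8 ns
print(effective_access_time(0.0, 200, 8_000_000))    # 200.0 ns (no faults)
```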
Instruction Restart
- An instruction may access several memory locations, so a fault partway through requires restarting the whole operation
- Auto-increment/auto-decrement addressing modes complicate restart
- Overlapping source and destination blocks (e.g., in block-move instructions) must also be handled
Demand Paging Optimizations
- Swap space I/O is faster than file system I/O
- Copy entire process image to swap space at process load time
- Demand page from program binary on disk and discard when freeing frame
- Swap space is still needed for pages not associated with a file (anonymous memory) and for pages modified but not yet written back
- Mobile systems
- Typically don't support swapping
- Instead, demand-page from the file system (e.g., code) and reclaim read-only pages
Copy-on-Write (COW)
- Allows parent and child processes to initially share the same pages in memory
- A page is copied only when either process modifies it
- A pool of free frames is needed to supply the copies on demand
- The vfork() variation suspends the parent and lets the child use the parent's address space directly, without copy-on-write, so it must be used with care
Page Replacement
- Invoked when a page must be brought in but no frame is free
- A modify (dirty) bit is used to reduce overhead: only modified pages are written back to disk
- Page replacement completes the separation of logical and physical memory: a large virtual memory can be provided on top of a smaller physical memory
Basic Page Replacement
- Find the location of the desired page on disk
- Find a free frame:
- If there is a free frame, use it
- If not, select a victim frame using a page-replacement algorithm
- Write the victim frame to disk if it is dirty
- Bring the desired page into the freed frame and update the tables
- Restart the instruction that caused the trap
- Note: this potentially doubles the page transfer time per fault (page out plus page in), increasing EAT
Frame-Allocation and Page Replacement Algorithms
- The frame-allocation algorithm determines how many frames each process gets and which frames can be replaced
- The page-replacement algorithm aims for the lowest page-fault rate on both first access and re-access
- Evaluate algorithm by running it on memory references (reference string)
FIFO (First-In-First-Out) Algorithm
- Replace pages in the order they arrived, using a FIFO queue
- The fault count varies with the reference string and the number of frames
- Suffers from Belady's Anomaly: adding more frames can cause more faults!
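Belady's Anomaly is easy to reproduce with a short FIFO simulation; on the classic reference string below, 3 frames give 9 faults but 4 frames give 10. A sketch, not tied to any real OS:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- more frames, MORE faults
```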
Optimal Algorithm
- Replace the page that will not be used for the longest period of time
- Cannot be implemented, since it requires knowledge of future references
- Used as a benchmark for measuring other algorithms' performance
LRU (Least Recently Used) Algorithm
- Replace the page that has not been used for the longest amount of time
- Associates a time of last use with each page
- Better than FIFO, but worse than OPT
- Generally considered a good algorithm and frequently used
LRU (Least Recently Used) Algorithm Implementation
- Counter implementation: every page-table entry has a counter; the clock is copied into it each time the page is referenced; replace the page with the smallest counter value
- Stack implementation: keep a stack of page numbers in a doubly linked list; move a referenced page to the top
- The stack needs more pointers and each update is more expensive, but no search is required at replacement time
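The stack implementation can be sketched in Python, with an OrderedDict standing in for the doubly linked list of page numbers (most recently used page at the end):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()   # ordered oldest-use first, newest-use last
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # move to the top of the stack
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults
print(lru_faults(refs, 4))  # 8 faults -- no Belady's Anomaly here
```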
LRU and OPT
- Both are stack algorithms, a class of algorithms that never exhibits Belady's Anomaly
- The stack keeps track of the most recent page references
LRU Approximation Algorithms
- Exact LRU needs special hardware support and is expensive
- Each page is given a reference bit, initially 0
- When the page is referenced, the hardware sets the bit to 1
- Replace a page whose reference bit is 0, if one exists (the exact order of use is not known)
Second-Chance Algorithms
- A FIFO scheme augmented with the hardware reference bit; also known as the clock algorithm
- When a page must be replaced, examine the next page in FIFO order:
- Reference bit = 0: replace it
- Reference bit = 1: clear the bit, leave the page in memory, and move on to the next page
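The clock scan above can be sketched as follows. A minimal illustration: page contents and the surrounding fault handling are omitted:

```python
def second_chance(frames, ref_bits, hand):
    """Clock (second-chance) victim selection.
    frames: list of resident pages; ref_bits: parallel list of reference
    bits; hand: current clock-hand index. Returns (victim_index, new_hand)."""
    while True:
        if ref_bits[hand] == 0:
            return hand, (hand + 1) % len(frames)  # victim found
        ref_bits[hand] = 0                         # give a second chance
        hand = (hand + 1) % len(frames)

frames   = ['A', 'B', 'C', 'D']
ref_bits = [1, 0, 1, 1]
victim, hand = second_chance(frames, ref_bits, 0)
print(frames[victim])  # 'B' -- first page found with reference bit 0;
                       # 'A' had its bit cleared and survived
```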
Enhanced Second Chance Algorithms
- Improves second chance by considering the reference bit and the modify bit together as an ordered pair (reference, modify)
- This yields four classes: (0, 0) neither recently used nor modified (best to replace); (0, 1) not recently used but modified; (1, 0) recently used but clean; (1, 1) recently used and modified (worst to replace)
- Replace the first page encountered in the lowest non-empty class; the circular queue may need to be scanned several times
Counting Algorithms
- Keep a counter of the number of references made to each page
- Not commonly used
- LFU replaces the page with the smallest count
- MFU replaces the page with the largest count, arguing that the page with the smallest count was probably just brought in and has yet to be used
Page-Buffering Algorithms
- Keep a pool of free frames, always available, so the desired page can be read in before the victim is written out
- Possibly also keep a list of modified pages
- Whenever the backing store is idle, write modified pages to it and clear their dirty bits
Applications and Page Replacement
- All OS replacement algorithms are general-purpose guesses about how applications access memory
- Some applications understand their own access patterns better, databases being the prime example
- Memory-intensive applications can end up double buffering: once in the OS, once in the application
- Some systems therefore give applications direct access to the disk ("raw disk" mode), bypassing OS buffering and replacement
Allocation of Frames to processes
- Each process needs a minimum number of frames, determined by the instruction-set architecture
- Fixed allocation: give each process an equal number of frames
- Proportional allocation: allocate frames according to the size of each process
- Allocations must adjust dynamically as the degree of multiprogramming changes
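Proportional allocation can be illustrated with a small calculation. A sketch; integer division is used, so a frame or two may be left over for the OS to hand out:

```python
def proportional_allocation(sizes, total_frames):
    """a_i = (s_i / S) * m: frames allocated in proportion to process size."""
    S = sum(sizes)
    return [s * total_frames // S for s in sizes]

# 62 free frames shared by a 10-page process and a 127-page process:
print(proportional_allocation([10, 127], 62))  # [4, 57]
```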
Global vs Local Allocation
- Global replacement: a process can select a replacement frame from the set of all frames, even one currently allocated to another process
- Local replacement: each process selects replacements only from its own allocated frames
Reclaiming Pages
- A strategy for implementing global page replacement
- Page-fault requests are satisfied from the free-frame list, rather than by searching for a victim at fault time
- Reclaiming is triggered when the list drops below a threshold, so there is always free memory available
Non-Uniform Memory Access (NUMA)
- So far we have assumed all memory is accessed equally fast
- In NUMA systems, memory access speed varies with the memory's location relative to the CPU
- Such systems consist of boards containing CPUs and memory, connected by a system bus
NUMA (cont.)
- Performance is optimized by allocating memory "close to" the CPU on which a thread is scheduled
- The scheduler cooperates by keeping each thread on the same low-latency group of CPUs
- Solaris, for example, tries to allocate all of a process's threads and memory within an lgroup (locality group)
Thrashing
- Thrashing: a process is busy swapping pages in and out instead of doing useful work
- Low CPU utilization makes the system think the degree of multiprogramming should be increased, so more processes are added, making it worse
- If a process does not have "enough" frames, its page-fault rate is very high
- It faults to bring in a page, replaces an existing frame, and then quickly needs the replaced frame back
Demand Paging and Thrashing
- Demand paging works because of the locality model: a process migrates from one locality to another
- Thrashing occurs when the total size of the localities is larger than the total memory size
Working Set
- The working set is the set of pages referenced in the most recent Δ references; working sets change over time, with peaks and valleys
- If the total demand D = Σ WSSi exceeds the number of available frames m, thrashing results, and the OS suspends and swaps out one of the processes
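A minimal working-set computation can be sketched as follows; the reference string and window size Δ = 5 are made-up example values:

```python
def working_set(refs, t, delta):
    """WS(t, delta): the set of pages referenced in the window of the
    last `delta` references ending at time t (0-indexed)."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 3, 3, 2]
print(working_set(refs, 8, 5))  # last 5 references -> pages 2, 3, 4
print(working_set(refs, 2, 3))  # first 3 references -> pages 1, 2
```

The working-set size shrinks when the process settles into a locality and grows during transitions between localities.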
Keeping Track of Working Set
- Approximate the working set with an interval timer plus the reference bit
- Example: Δ = 10,000 references; keep 2 history bits per page; on each timer interrupt (every 5,000 references), copy each reference bit into the history bits and clear it
- A page with any of its bits set is considered part of the working set
- This is not completely accurate, because we cannot tell where within the interval a reference occurred; more history bits and more frequent interrupts improve accuracy at higher cost
Page-Fault Frequency
- A more direct approach than the working-set model, using local replacement
- Establish an "acceptable" page-fault rate: if a process's actual rate is too low, it loses frames; if too high, it gains frames
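A PFF controller can be sketched as a simple threshold rule. The 2% and 10% bounds here are hypothetical illustration values, not from any particular system:

```python
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10, step=1):
    """Page-fault-frequency control with assumed example thresholds:
    too few faults -> take a frame away; too many -> grant one more."""
    if fault_rate < lower:
        return max(1, frames - step)   # rate too low: process loses a frame
    if fault_rate > upper:
        return frames + step           # rate too high: process gains a frame
    return frames                      # within the acceptable band

print(adjust_frames(10, 0.01))  # 9
print(adjust_frames(10, 0.20))  # 11
print(adjust_frames(10, 0.05))  # 10
```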
Allocating Kernel Memory
- Kernel memory is treated differently from user memory, often allocated from its own free-memory pool
- The kernel requests memory for data structures of varying sizes, and some of it must be physically contiguous
- Certain hardware, such as I/O devices that interact directly with physical memory, requires contiguous allocations
Other considerations-Prepaging
- Prepaging tries to reduce the burst of page faults that occurs when a process starts
- It brings in all or some of the pages a process will need before they are referenced
- If prepaged pages go unused, I/O and memory have been wasted
- The question: is the cost of s x α saved page faults greater or less than the cost of prepaging s x (1 − α) unnecessary pages?
- If α is near 0, prepaging loses
Page size
- Page size is sometimes selectable by OS designers, especially on custom-built systems; the choice must balance:
- Fragmentation (favors smaller pages)
- Page table size (favors larger pages)
- Resolution and I/O overhead
- Number of page faults and locality
- TLB size and effectiveness
- Page sizes are always powers of 2, commonly ranging from 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
- On average, page sizes have grown over time
TLB Reach
- TLB reach is the amount of memory accessible from the TLB: TLB reach = TLB size x page size
- Ideally, the working set of each process fits within the TLB reach
- Otherwise, there is a high degree of TLB misses and the resulting page-table walks
- Increasing the page size increases the reach, but may also increase fragmentation
- Providing multiple page sizes allows a larger reach with less fragmentation
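The TLB reach formula is a one-liner; the 64-entry TLB and the two page sizes below are assumed example figures:

```python
def tlb_reach(entries, page_size_bytes):
    """TLB reach = (number of TLB entries) x (page size)."""
    return entries * page_size_bytes

# A hypothetical 64-entry TLB with 4 KB pages covers 256 KB:
print(tlb_reach(64, 4 * 1024))         # 262144 bytes
# The same TLB with 2 MB large pages covers 128 MB:
print(tlb_reach(64, 2 * 1024 * 1024))  # 134217728 bytes
```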
Program Structure
- Program structure matters: consider int data[128][128] stored row by row (row-major), with one 128-int row per page
- Program 1, whose inner loop walks down a column, touches a different page on every access: up to 128 x 128 = 16,384 page faults
- Program 2, whose inner loop walks along a row, touches each page 128 times in succession: only 128 page faults
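A small simulation makes the difference concrete. A sketch assuming one array row per page and a single available frame (the worst case):

```python
from collections import deque

def count_faults(order, nframes):
    """FIFO page-fault count for a sequence of page numbers."""
    frames, queue, faults = set(), deque(), 0
    for page in order:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

N = 128
# Row-major layout: data[i][j] lives on page i (one row per page).
col_major = [i for j in range(N) for i in range(N)]  # inner loop varies i
row_major = [i for i in range(N) for j in range(N)]  # inner loop varies j
print(count_faults(col_major, 1), count_faults(row_major, 1))  # 16384 128
```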
I/O Interlock
- Pages must sometimes be locked into memory
- Pages being used to copy data to or from a device must not be selected for eviction by a page-replacement algorithm
- Pinning locks such pages into memory
Priority Allocation
- Use a proportional allocation scheme based on priority rather than, or in addition to, size
- If a process generates a page fault, it can select a replacement frame from its own frames
- Or it can take a frame from a process with lower priority