Virtual Memory Concepts

Questions and Answers

What is the primary advantage of virtual memory?

  • It reduces the size of the physical memory required by a system.
  • It eliminates the need for memory management.
  • It allows programs to execute even if their entire code is not in physical memory. (correct)
  • It increases the speed of memory access for all programs.

How does demand paging improve system efficiency?

  • By increasing the amount of physical memory available to each process.
  • By reducing I/O operations, loading pages only when they are needed. (correct)
  • By predicting which pages will be used next and loading them in advance.
  • By loading all pages of a process into memory at the start.

In demand paging, what is the role of a 'pager'?

  • To predict which pages will be needed next.
  • To allocate physical memory frames evenly among running processes.
  • To manage the swapping of individual pages between memory and storage. (correct)
  • To swap entire processes into memory.

What is the purpose of the valid-invalid bit in a page table entry?

<p>To denote whether the page is currently residing in physical memory. (correct)</p>

What is a 'page fault'?

<p>An attempt to access a page that is not currently in physical memory. (correct)</p>

Which of the following steps occurs first when handling a page fault?

<p>The reference to the page traps to the operating system. (correct)</p>

What is 'pure demand paging'?

<p>Starting a process with no pages in memory. (correct)</p>

Which hardware support is essential for demand paging?

<p>A page table with valid/invalid bits. (correct)</p>

Why is instruction restart needed in demand paging?

<p>To resume the interrupted instruction after the page fault has been resolved. (correct)</p>

What is the purpose of a 'free-frame list'?

<p>To list all available physical memory frames that can be used for pages. (correct)</p>

What is 'copy-on-write' (COW)?

<p>A memory optimization technique where pages are shared between processes until one of them modifies a page. (correct)</p>

What happens when a page fault occurs and there are no free frames available?

<p>A page replacement algorithm is used to select a victim frame. (correct)</p>

What is a 'dirty bit' used for in page replacement?

<p>To indicate a page that has been modified and needs to be written back to disk. (correct)</p>

Which goal is most important when evaluating a page-replacement algorithm?

<p>Minimizing the page-fault rate. (correct)</p>

In the context of page replacement algorithms, what is a 'reference string'?

<p>A sequence of memory page accesses used to evaluate an algorithm. (correct)</p>

What is Belady's Anomaly?

<p>The unexpected increase in page faults when increasing the number of frames. (correct)</p>

Which of the listed algorithms is considered an 'optimal' page replacement algorithm?

<p>The algorithm that replaces the page that will not be used for the longest period of time. (correct)</p>

What is the main advantage of the Least Recently Used (LRU) page replacement algorithm?

<p>It uses past knowledge of page usage to predict future usage. (correct)</p>

What is one way to implement the LRU page-replacement algorithm?

<p>Using a stack of page numbers. (correct)</p>

What is the purpose of the 'reference bit' in the LRU approximation algorithms?

<p>To approximate the time of the last page reference. (correct)</p>

Which page replacement algorithm is the second-chance algorithm based on?

<p>FIFO (correct)</p>

In the Enhanced Second-Chance Algorithm, what does the ordered pair (0, 1) represent?

<p>Not recently used and modified. (correct)</p>

What is the main idea behind the Least Frequently Used (LFU) page replacement algorithm?

<p>Replace the page that has been used least frequently. (correct)</p>

What is 'page buffering'?

<p>Keeping a pool of free frames to reduce the penalty of wrong victim frame selection. (correct)</p>

What is 'raw disk' mode?

<p>A mode where the operating system bypasses buffering and locking, giving applications direct access to the disk. (correct)</p>

What is the concept of 'minimum number of frames' in the context of frame allocation?

<p>The smallest number of frames a process can have allocated to it. (correct)</p>

What is the difference between fixed allocation and priority allocation?

<p>Fixed allocation assigns frames based on process size, while priority allocation assigns frames based on process priority. (correct)</p>

What is a characteristic of global page replacement?

<p>Selecting a replacement frame from the set of all frames, possibly taking a frame from another process. (correct)</p>

What is the main purpose of reclaiming pages using a free-frame list approach?

<p>To ensure there is always sufficient free memory to satisfy new requests. (correct)</p>

What does NUMA stand for in the context of memory access?

<p>Non-Uniform Memory Access. (correct)</p>

What is the performance benefit of allocating memory 'close to' the CPU in a NUMA system?

<p>It reduces memory access latency. (correct)</p>

What is 'thrashing'?

<p>A state where a process is busy swapping pages in and out, leading to low CPU utilization. (correct)</p>

What is the 'locality model'?

<p>A model of how processes tend to access memory in clusters. (correct)</p>

How does the operating system use the 'working-set model'?

<p>To monitor memory usage and prevent thrashing. (correct)</p>

What is 'page-fault frequency (PFF)' used for?

<p>To measure the rate at which page faults occur and adjust frame allocation. (correct)</p>

True or false: Kernel memory is treated the same as user memory.

<p>False. (correct)</p>

What is 'prepaging'?

<p>A strategy to pre-emptively load pages a process will likely need. (correct)</p>

Which factor should be taken into account when deciding on a page size?

<p>Fragmentation (correct)</p>

How is TLB Reach calculated?

<p>(TLB Size) × (Page Size) (correct)</p>

Why is 'pinning' pages used?

<p>To prevent them from being swapped out during I/O operations. (correct)</p>

Flashcards

Virtual memory

Separation of user logical memory from physical memory.

Virtual address space

Logical view of how a process is stored in memory, starting at address 0.

Demand paging

Only bring a page into memory when it is needed.

Pager

Swaps individual pages of a process in and out of memory.

Lazy swapper

Never swaps a page into memory unless page will be needed.

Valid-invalid bit

Indicates if a page is in memory or not.

Page fault

Reference to a page that is not in memory.

Page fault handling

The operating system must bring the desired page from secondary storage into main memory.

Free frame list

A pool of free frames for satisfying requests.

Zero-fill-on-demand

Technique where free frames are zeroed-out before allocation.

Pure demand paging

Start process with no pages in memory.

Swap space

Secondary-storage space reserved for pages swapped out of memory.

Copy-on-write

Allows parent and child processes to initially share the same pages in memory.

Zero-fill-on-demand pages

Free pages are allocated from a pool of these.

Page replacement

Finds a page in memory that is not really in use, and pages it out.

Victim frame

The frame whose page is swapped out when a free frame is needed.

Frame-allocation algorithm

Determines how many frames to give each process and which frames to replace.

Page-replacement algorithm

Algorithm to achieve lowest page-fault rate on both first access and re-access.

Reference string

String of memory references. Used for evaluating page-replacement algorithms.

Belady's Anomaly

Occurs when adding more frames causes more page faults with FIFO algorithm.

Optimal Algorithm

Replace page that will not be used for longest period of time.

Least Recently Used (LRU)

Replace the page that has not been used for the longest time.

LRU Approximation

Approximates LRU using a reference bit.

Second-chance algorithm

Generally FIFO, plus hardware-provided reference bit.

Counting Algorithms

Keep a counter of the number of references that have been made to each page.

Least Frequently Used

Algorithm which replaces the page with smallest count.

Most Frequently Used

Algorithm which replaces the page with the largest count, on the argument that the page with the smallest count was probably just brought in.

Page-Buffering Algorithms

Keep a pool of free frames, always.

Minimum frame allocation

Each process needs a minimum number of these.

Equal allocation

Allocate an equal number of frames to each process.

Proportional allocation

Allocate frames according to the size of each process.

Global replacement

Process selects a replacement frame from the set of all frames.

Local replacement

Each process selects from only its own set of allocated frames.

Non-Uniform Memory Access

Speed of access to memory varies.

Thrashing

Process is busy swapping pages in and out.

Locality model

Process migrates from one locality to another.

Working-Set Model

Δ = working-set window = a fixed number of page references.

List of modified pages

When the backing store is otherwise idle, write modified pages to it and mark them non-dirty.

Pinning

Locks pages into memory to guarantee their availability, e.g., during I/O.

Prepaging

Strategy that loads pages before they are referenced, reducing the large number of page faults at process startup.

I/O Interlock

Pages being used for I/O, such as copying a file from a device, must be locked in memory.

Study Notes

  • Chapter 10 discusses virtual memory concepts
  • Virtual memory separates user logical memory from physical memory allowing programs to be partially loaded into memory

Virtual Memory Benefits

  • Enables execution of programs not entirely in memory
  • Allows logical address space to exceed physical address space
  • Facilitates address space sharing among processes
  • Supports more efficient process creation and enables concurrent execution of programs
  • Reduces I/O for process loading/swapping

Virtual Address Space

  • Represents the logical view of how a process is stored in memory, typically starting at address 0
  • Physical memory is organized into page frames
  • MMU maps logical to physical addresses
  • Implemented via demand paging or demand segmentation

Demand Paging

  • Pages are brought into memory only when referenced
  • Reduces I/O and memory needs
  • Allows faster response
  • Supports more users
  • A "lazy swapper" (pager) only swaps pages when needed
  • "Not-in-memory" pages trigger fetching/load into memory

Basic Concepts of Demand Paging

  • The pager brings in only those pages that are needed, rather than a swapper guessing and moving an entire process
  • New MMU functionality is required to implement demand paging
  • Accesses to memory-resident pages behave exactly as they would without demand paging
  • Non-memory-resident pages must be detected and loaded into memory without changing program behavior or requiring programmer changes

Valid-Invalid Bit

  • Each page table entry has a valid-invalid bit to indicate if a page is in memory (v) or not (i)
  • Initially, set to i on all entries
  • MMU checks this bit during address translation, triggering a page fault if it is i

Handling a Page Fault

  • A reference to a non-resident page traps to the OS as a page fault
  • The OS checks an internal table: if the reference is invalid, abort the process; if it is valid but the page is not in memory, continue
  • Find a free frame
  • Swap the page into the frame via a scheduled disk operation
  • Update the tables to indicate the page is now in memory, and set the valid bit to v
  • Restart the instruction that caused the page fault
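The fault-handling sequence above can be sketched as a toy Python model. This is a hedged illustration, not the lesson's material: the page table, free-frame list, and backing store below are invented stand-ins.

```python
# Minimal sketch of the valid-invalid bit check and the page-fault path.
import collections

page_table = {0: ("v", 12), 1: ("i", None)}   # page -> (bit, frame)
free_frames = collections.deque([3, 7])       # free-frame list
backing_store = {1: "page-1-contents"}        # simulated swap space

def access(page):
    bit, frame = page_table[page]
    if bit == "v":
        return frame                      # normal translation, no fault
    # Page fault: trap to the OS, find a free frame, load the page.
    frame = free_frames.popleft()
    _ = backing_store[page]               # scheduled disk read (simulated)
    page_table[page] = ("v", frame)       # update table, set bit to v
    return frame                          # instruction is then restarted

print(access(1))  # 3 -- page fault serviced, first free frame used
print(access(1))  # 3 -- subsequent access finds the page resident
```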

Aspects of Demand Paging

  • The extreme case is "pure demand paging," which starts a process with no pages in memory
  • Locality of reference keeps the fault rate manageable in practice
  • Hardware support is needed: a page table with valid/invalid bits, secondary storage (swap space), and instruction restart
  • The OS sets the instruction pointer to the first instruction of the process which, being non-memory-resident, immediately causes a page fault

Free-Frame List

  • When a page fault occurs, the OS must bring the desired page from secondary storage into main memory
  • Most operating systems maintain a free-frame list, a pool of free frames for satisfying such requests
  • Free frames are typically allocated using zero-fill-on-demand: frames are zeroed out before being allocated
  • When a system starts up, all available memory is placed on the free-frame list

Demand Paging Stages

  • Trapping to the OS
  • Saving the user register and process state
  • Determining that the interrupt was a page fault
  • Checking the page reference legality and location of the page on the disk
  • Issue a read from the disk to a free frame
  • Wait in the device queue
  • Wait for device seek and/or latency time
  • Transfer page to a free frame
  • Allocate the CPU to another user while waiting
  • Receive an interrupt from the I/O subsystem
  • Save the registers and process state for the user
  • Determine the interrupt came from the Disk
  • Correct page and other tables to show the page is in memory
  • Wait for the CPU to allocate to this process again
  • Restore user registers, process state and new page table
  • Resume interrupted instruction

Performance of Demand Paging

  • Consists of servicing the interrupt, reading the page and restarting the process
  • Page Fault Rate (p) has a range from 0 (no faults) to 1 (every reference is a fault)
  • Effective Access Time (EAT) = (1 – p) × memory access time + p × (page-fault overhead + swap page out + swap page in)
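As a quick sanity check of the EAT formula, the sketch below plugs in illustrative numbers (the 200 ns access time and 8 ms fault-service time are assumed example values, not figures from the lesson):

```python
# Effective Access Time for demand paging.
def effective_access_time(p, memory_access_ns, fault_service_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time."""
    return (1 - p) * memory_access_ns + p * fault_service_ns

# Assumed values: 200 ns memory access, 8 ms total fault service time.
eat = effective_access_time(p=0.001, memory_access_ns=200,
                            fault_service_ns=8_000_000)
print(f"EAT = {eat:.1f} ns")  # EAT = 8199.8 ns
```

Even one fault per thousand references slows effective memory access by roughly a factor of 40, which is why keeping the fault rate low matters so much.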

Instruction Restart

  • An instruction may access several memory locations, e.g., a block move with overlapping source and destination, or auto-increment/auto-decrement addressing
  • If a fault occurs partway through, the whole operation must be restarted

Demand Paging Optimizations

  • Swap space I/O is faster than file system I/O
  • Copy entire process image to swap space at process load time
  • Demand page from program binary on disk and discard when freeing frame
  • Still need to write to swap space such as pages that are non-file associated (anonymous memory) and pages modified but not yet written back
  • Mobile systems
  • Typically don't support swapping
  • Instead, demand-page from the file system and reclaim read-only pages (such as code)

Copy-on-Write (COW)

  • Allows parent and child processes to initially share the same pages in memory
  • Only if either process modifies a shared page is that page copied
  • A pool of free frames is needed to satisfy these copies (and other demand-paged allocations)
  • The vfork() variation suspends the parent and lets the child use the parent's address space directly, without copy-on-write; it is efficient when the child calls exec() immediately

Page Replacement

  • Used to prevent over-allocation of memory
  • A modify (dirty) bit reduces page-transfer overhead: only modified pages are written back to disk
  • Completes the separation of logical and physical memory: a large virtual memory can be provided on top of a smaller physical memory

Basic Page Replacement

  • Find the location of the desired page on disk
  • Find a free frame
  • If free use it
  • if not select a victim frame using a page replacement algorithm
  • Write victim frame to the disk if dirty
  • Bring the desired page into the freed frame and update the page and frame tables
  • Restart instruction that caused the trap
  • Note this potentially increases EAT due to page transfer for page fault

Frame-Allocation and Page Replacement Algorithms

  • A frame-allocation algorithm determines how many frames to give each process and which frames may be replaced
  • A page-replacement algorithm aims for the lowest page-fault rate on both first access and re-access
  • Algorithms are evaluated by running them on a particular string of memory references (a reference string)

FIFO (First-In-First-Out) Algorithm

  • Replace pages in the order they were brought in, using a FIFO queue
  • The fault count varies with the reference string and the number of frames
  • Suffers from Belady's Anomaly: adding more frames can cause more page faults
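A short simulation (not part of the lesson itself) reproduces Belady's Anomaly on the classic reference string:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults for FIFO replacement over a reference string."""
    frames = deque()          # oldest page at the left
    resident = set()
    faults = 0
    for page in reference_string:
        if page not in resident:
            faults += 1
            if len(frames) == num_frames:
                resident.discard(frames.popleft())  # evict oldest page
            frames.append(page)
            resident.add(page)
    return faults

# Classic reference string exhibiting Belady's Anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -- more frames, yet more faults
```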

Optimal Algorithm

  • Replace the page that will not be used for the longest period of time
  • Cannot be implemented, since it requires knowledge of future references
  • Used as a benchmark to measure how well other algorithms perform

LRU (Least Recently Used) Algorithm

  • Replace the page that has not been used for the longest time
  • Associates a time of last use with each page
  • Better than FIFO, but worse than OPT
  • Generally considered a good algorithm and frequently used

LRU (Least Recently Used) Algorithm Implementation

  • Counter implementation: every page-table entry has a counter; whenever a page is referenced, the clock value is copied into its counter; replace the page with the smallest counter value
  • Stack implementation: keep a stack of page numbers in a doubly linked list; when a page is referenced, move it to the top
  • The stack needs extra pointers and each update is more expensive, but no search is required at replacement time
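The stack implementation can be sketched in Python with an OrderedDict standing in for the doubly linked list (an illustrative sketch, not the lesson's code; the reference string is the standard textbook example):

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults for LRU; the OrderedDict plays the role of the
    stack, with the most recently used page kept at the right end."""
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # "move to top of stack"
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```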

LRU and OPT

  • Both are stack algorithms, a class of algorithms that never exhibits Belady's Anomaly
  • The LRU stack keeps the most recent page references at the top

LRU Approximation Algorithms

  • True LRU needs special hardware and is still slow, so it is approximated
  • Reference bit: each page is associated with a bit, initially 0
  • When the page is referenced, the hardware sets the bit to 1
  • Replace any page whose reference bit is 0 (if one exists); the order of use is not known

Second-Chance Algorithms

  • Essentially FIFO plus the hardware-provided reference bit; also known as the clock replacement algorithm
  • When a page must be replaced, examine the next page in clock order:
  • If its reference bit is 0, replace it
  • If its reference bit is 1, clear the bit, leave the page in memory, and move on to the next page
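The clock-hand behavior can be sketched as follows (a minimal illustration; the frame contents and bit values are invented):

```python
def second_chance_evict(frames, ref_bits):
    """Advance the clock hand until a page with reference bit 0 is found,
    clearing bits (granting a 'second chance') along the way.
    frames: list of page numbers; ref_bits: parallel list of 0/1 bits."""
    hand = 0
    while True:
        if ref_bits[hand] == 0:
            return frames[hand], hand     # victim page and its slot
        ref_bits[hand] = 0                # second chance: clear bit
        hand = (hand + 1) % len(frames)   # move the hand onward

frames, bits = [3, 5, 8], [1, 0, 1]
victim, slot = second_chance_evict(frames, bits)
print(victim, slot)  # 5 1 -- page 3 was spared, its bit cleared
```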

Enhanced Second Chance Algorithms

  • Improves the second-chance algorithm by using the reference bit and the modify bit in concert
  • Consider the ordered pair (reference, modify): (0, 0) neither recently used nor modified is the best page to replace; (0, 1) not recently used but modified; (1, 0) recently used but clean; (1, 1) recently used and modified is the worst
  • Replace the first page encountered in the lowest non-empty class; the circular queue may need to be scanned several times

Counting Algorithms

  • Keep a counter of the number of references made to each page
  • Not commonly used
  • LFU replaces the page with the smallest count
  • MFU replaces the page with the largest count, on the argument that the page with the smallest count was probably just brought in and has yet to be used

Page-Buffering Algorithms

  • Keep a pool of free frames, always available; on a fault the desired page is read into a free frame from the pool before the victim is written out
  • Possibly keep a list of modified pages
  • When the backing store is otherwise idle, write modified pages to it and mark them non-dirty

Applications and Page Replacement

  • The OS's replacement algorithms are general-purpose guesses about how applications access memory
  • Some applications have better knowledge of their own access patterns, databases in particular
  • OS buffering on top of an application's own buffering wastes memory (double buffering)
  • Some operating systems therefore give applications direct access to the disk ("raw disk" mode), bypassing buffering, locking, and other services

Allocation of Frames to processes

  • Each process needs a minimum number of frames
  • Equal allocation: give each process the same number of frames (fixed allocation)
  • Proportional allocation: allocate according to the size of each process
  • Allocations should adjust dynamically as the degree of multiprogramming and process sizes change
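Proportional allocation can be sketched numerically (an illustration; the process sizes of 10 and 127 pages and the 62 available frames are assumed example values):

```python
def proportional_allocation(process_sizes, total_frames):
    """Allocate frames proportionally to process size: a_i = (s_i / S) * m.
    Rounds down; leftover frames are left unassigned in this sketch."""
    total_size = sum(process_sizes)
    return [size * total_frames // total_size for size in process_sizes]

# Two processes of 10 and 127 pages sharing 62 frames.
print(proportional_allocation([10, 127], 62))  # [4, 57]
```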

Global vs Local Allocation

  • Global replacement: a process selects a replacement frame from the set of all frames, so one process can take a frame from another
  • Local replacement: each process selects only from its own set of allocated frames, giving more consistent per-process performance but possibly underutilized memory

Reclaiming Pages

  • A strategy for implementing global page replacement
  • Page-fault allocations are satisfied from the free-frame list, rather than waiting for the list to be empty before selecting victims
  • Reclamation is triggered when the list falls below a threshold, so there is always enough free memory to satisfy new requests

Non-Uniform Memory Access (NUMA)

  • So far we have assumed all memory is accessed equally
  • In NUMA systems, the speed of access to memory varies with its location
  • Such systems consist of boards containing CPUs and memory, connected by a system interconnect

NUMA (cont.)

  • Performance is optimized by allocating memory "close to" the CPU on which a thread is scheduled
  • Schedulers likewise try to keep a thread on the same low-latency group of CPUs
  • Solaris, for example, tries to schedule all of a process's threads, and allocate all of its memory, within one latency group (lgroup)

Thrashing

  • If a process does not have "enough" frames, its page-fault rate is very high: it faults, replaces some frame, and soon needs the replaced frame back again
  • The resulting low CPU utilization leads the OS to think it should increase the degree of multiprogramming, adding another process and making matters worse
  • Thrashing: a process is busy swapping pages in and out instead of doing useful work

Demand Paging and Thrashing

  • Why does demand paging work? The locality model: a process migrates from one locality (a set of pages actively used together) to another, and localities may overlap
  • Thrashing occurs when the total size of the current localities exceeds the total memory size: a process given fewer frames than its locality needs will thrash

Working Set

  • Δ = working-set window = a fixed number of recent page references
  • WSS (working-set size) = the number of distinct pages referenced in the most recent Δ references; working sets change over time, with peaks and valleys
  • If the total demand of all working sets exceeds the number of available frames, thrashing results; the remedy is to suspend or swap out one of the processes
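The working set over a window Δ can be sketched directly (an illustration; the reference string below is invented):

```python
def working_set(reference_string, t, delta):
    """WSS(t, Δ): the distinct pages referenced in the most recent
    Δ references ending at time t (0-indexed)."""
    start = max(0, t - delta + 1)
    return set(reference_string[start : t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
print(working_set(refs, t=4, delta=5))  # {1, 2, 5, 7}
print(working_set(refs, t=7, delta=5))  # {5, 7} -- the locality shrank
```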

Keeping Track of Working Set

  • Approximate the working set with an interval timer plus reference bits
  • Example: Δ = 10,000 references, with a timer interrupt every 5,000 references; keep 2 in-memory bits per page, and on each interrupt copy and clear the reference-bit values; a page with any bit set is in the working set
  • Why is this not completely accurate? We cannot tell where within the 5,000-reference interval a reference occurred
  • Improvement: 10 bits and an interrupt every 1,000 references, at higher cost

Page-Fault Frequency

  • A more direct approach than the working-set model: establish an "acceptable" page-fault frequency and use local replacement
  • If a process's fault rate is too low, it loses frames; if too high, it gains frames

Allocating Kernel Memory

  • Kernel memory is treated differently from user memory; it is often allocated from its own pool of free memory
  • The kernel requests memory for structures of varying sizes, some of which must be physically contiguous
  • Certain hardware, such as I/O devices, must interact directly with physically contiguous memory

Other Considerations: Prepaging

  • Prepaging reduces the large number of page faults that occur at process startup
  • Prepage all or some of the pages a process will need, before they are referenced
  • If prepaged pages go unused, I/O and memory have been wasted
  • Assume s pages are prepaged and a fraction α of them is actually used: is the cost of s × α saved page faults greater or less than the cost of prepaging s × (1 – α) unnecessary pages?
  • If α is near zero, prepaging loses

Page size

  • Sometimes the OS designer can select the page size (especially for custom-built systems)
  • The selection must balance:
    • Fragmentation (favors small pages)
    • Page table size (favors large pages)
    • TLB size and effectiveness
    • I/O overhead
    • Number of page faults and locality
  • Page sizes are always powers of 2, typically ranging from 4 KB to 1 GB
  • On average, page sizes have grown over time

TLB Reach

  • TLB reach = the amount of memory accessible from the TLB = (TLB size) × (page size)
  • Ideally, the working set of each process fits in the TLB; otherwise there is a high degree of TLB misses
  • Increasing the page size increases TLB reach, but may increase fragmentation for applications that do not need large pages
  • Providing multiple page sizes allows applications that need larger pages to use them without an increase in fragmentation for the rest
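The TLB reach formula is a one-line calculation (the 64-entry TLB and 4 KiB pages below are assumed example values):

```python
def tlb_reach(tlb_entries, page_size_bytes):
    """TLB reach = (number of TLB entries) * (page size)."""
    return tlb_entries * page_size_bytes

# A 64-entry TLB with 4 KiB pages covers 256 KiB of memory.
print(tlb_reach(64, 4096) // 1024, "KiB")  # 256 KiB
```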

Program Structure

  • How data structures are laid out across pages affects the fault rate; consider a 128 × 128 int array where each row is stored in one page
  • A program iterating column by column touches a different page on nearly every access: up to 128 × 128 = 16,384 page faults
  • A program iterating row by row touches each page in one burst: as few as 128 page faults

I/O Interlock

  • Pages must sometimes be locked into memory
  • For example, pages being used to copy a file from a device must be locked from being selected for eviction by a page-replacement algorithm
  • Pinning marks such pages as locked into memory

Priority Allocation

  • Use a proportional allocation scheme based on process priority rather than (or in addition to) size
  • If a process generates a page fault, select for replacement one of its own frames, or a frame from a process with lower priority
