Understanding Cache Memory


Questions and Answers

Which of the following is the most accurate description of cache memory?

  • Large, slow, cheap, dynamic RAM
  • Small, fast, expensive, static RAM (correct)
  • Small, slow, cheap, dynamic RAM
  • Large, fast, expensive, static RAM

What defines a 'cache block' in the context of cache memory?

  • A contiguous unit of cache storage, typically containing multiple memory words. (correct)
  • The smallest addressable unit of memory in the system.
  • A single bit used to indicate whether a cache line is valid.
  • The tag associated with a cache line, identifying the memory it contains.

Which statement accurately describes the 'temporal locality' principle in caching?

  • Programs tend to reference widely scattered memory locations.
  • Programs never access the same memory location more than once.
  • Programs tend to reference memory locations randomly.
  • Programs tend to reference the same memory locations multiple times over a short period. (correct)

Which scenario exemplifies spatial locality?

  • Accessing or updating elements of an array sequentially. (correct)

In a cache read operation, what happens when the requested data is not found in the cache?

  • A 'cache miss' occurs, and the data is retrieved from main memory. (correct)

What is the primary difference between 'write-through' and 'write-back' cache policies?

  • Write-through updates main memory immediately, while write-back updates main memory only when the cache block is replaced. (correct)

Which memory level in the memory hierarchy typically has the lowest latency (fastest access time)?

  • 1st-level cache/first-level TLB (correct)

Which is a characteristic of a small cache?

  • Small hit rate (correct)

What is 'thrashing' in the context of cache memory?

  • Excessive swapping of data between cache and main memory, leading to poor performance. (correct)

Which best describes a 'working set'?

  • The portion of a process that must be in memory for efficient execution. (correct)

What is the resident set?

  • The portion of a process that is actually in physical memory. (correct)

Which of the following is NOT a common mechanism for cache lookup?

  • Indirect addressing (correct)

In a 'fully associative' cache, where can a memory address be stored?

  • Anywhere in the cache (correct)

Which is a downside of a fully associative cache?

  • Slower lookups (correct)

What is the key characteristic of a 'direct mapped' cache?

  • Each memory address can be stored at only one specific location in the cache. (correct)

What is the primary advantage of a direct-mapped cache?

  • Faster lookups (correct)

How does a 'set associative' cache combine the benefits of direct-mapped (DM) and fully associative (FA) caches?

  • It divides the cache into sets, where each set operates like a small FA cache, combining speed and flexibility. (correct)

What is the purpose of the 'dirty bit' in a page table entry?

  • To indicate whether the page has been modified since it was loaded into memory. (correct)

Why is it important for the OS kernel to be able to reset bookkeeping bits in page tables?

  • To manage cache replacement policies and track page usage. (correct)

How does keeping dirty/use bits in the core map on MIPS simplify memory management?

  • It centralizes the tracking of page modifications and usage, simplifying memory management algorithms. (correct)

What is the purpose of a cache replacement policy?

  • To decide which cache entry to replace when a new entry needs to be added. (correct)

Which is NOT a goal of cache replacement policies?

  • Reduce cache size (correct)

Under a FIFO cache replacement policy, which entry is replaced on a cache miss?

  • The entry that has been in the cache the longest. (correct)

What is a potential problem with the FIFO cache replacement policy?

  • It can lead to 'Belady's anomaly,' where increasing cache size increases the miss rate. (correct)

How does the LRU (Least Recently Used) cache replacement policy work?

  • It replaces the entry that has not been used for the longest time. (correct)

In the context of cache replacement policies, what is LFU designed to do?

  • Replace the entry used the least often (correct)

What is Belady's Anomaly?

  • When increasing the number of frames leads to an increase in page faults (correct)

How does the Clock algorithm approximate LRU replacement?

  • By periodically sweeping through all pages and marking them as unused. (correct)

What is the purpose of the 'Nth chance' modification to the Not Recently Used (NRU) page replacement algorithm?

  • To give pages multiple opportunities to be used before reclaiming them. (correct)

What is page coloring primarily designed to address?

  • Minimizing cache conflicts when many pages map to the same cache lines. (correct)

How does the operating system use page coloring to improve cache performance?

  • By strategically assigning physical page frames to minimize cache contention. (correct)

Which is NOT an example of Zipf's Law?

  • Hash tables (correct)

What does the Zipf distribution suggest about caching behavior in many systems?

  • A small number of items are accessed much more frequently than others. (correct)

According to the Zipf distribution, what is the relationship between the popularity of an item and its rank?

  • Popularity is inversely proportional to the rank raised to a power between 1 and 2. (correct)

Why is the Zipf distribution relevant to cache design?

  • It implies that optimizing cache performance for the most popular items can yield significant benefits. (correct)

What is the impact of memory access patterns that conform to a Zipf distribution on caching strategies?

  • Caching strategies should focus on maximizing hit rates for the most popular items. (correct)

Flashcards

Cache

Small, fast, expensive static RAM used for quick data access.

Cache Block

The fundamental unit of storage in a cache, containing a small number of memory words.

Cache Hit

Accessing the requested data in the cache.

Cache Miss

Failing to find the requested data in the cache, requiring access to main memory.

Temporal Locality

Programs tend to reference the same memory locations multiple times.

Spatial Locality

Programs tend to reference memory locations that are near each other.

Fully Associative Cache

A cache design where any memory address can be stored at any location in the cache.

Direct Mapped Cache

A cache design where each memory address can only be stored at one specific location in the cache.

Set Associative Cache

A cache design that combines aspects of direct mapped and fully associative caches, using sets of cache locations.

Write-through

Changes sent to the next level of memory immediately on write.

Write-back

Changes stored in cache and written to the next memory level when the cache block is replaced.

Working Set

The portion of a process that needs to be in memory for efficient execution.

Resident Set

The portion of a process that is actually in physical memory.

Cache Replacement Policy

The process of selecting which entry to replace in the cache on a cache miss.

Random Replacement

Replacing a random entry in the cache.

FIFO

Replacing the entry that has been in the cache for the longest time.

LRU (Least Recently Used)

Replacing the entry that has not been used for the longest time.

LFU (Least Frequently Used)

Replacing the cache entry used the least often in the recent past.

Belady's Anomaly

An anomaly where increasing the number of page frames increases the number of page faults under the FIFO algorithm.

Clock Algorithm

An algorithm to estimate LRU by periodically sweeping through all pages.

Nth Chance Algorithm

Uses an integer to track sweeps since the last use, as opposed to one bit per page.

Page Coloring

An OS technique that assigns physical page frames so pages spread evenly across the cache, reducing conflicts when multiple pages map to the same cache lines.

Zipf Distribution

The popularity of items follows a power law distribution.

Study Notes

Definitions

  • Cache is defined as small, fast, expensive, static RAM
  • A cache holds a copy of data that is faster to access than the original
  • A cache hit occurs if the cache has a copy of the data
  • A cache miss occurs if the cache does not have a copy
  • A cache block is the unit of cache storage
  • A cache block contains 4-64 memory words in hardware caches
  • In file systems, a cache block can be a power-of-2 multiple of the hardware page size

Cache Usefulness

  • For a cache to be useful, retrieving from the cache must cost less than fetching from memory
  • The likelihood of a cache hit must be high enough
  • Sources of predictability include temporal and spatial locality

Temporal and Spatial Locality

  • Temporal locality: programs tend to reference the same memory locations multiple times
  • Examples of temporal locality: instructions in a loop, incrementing a counter variable, and repeated accesses to a data structure
  • Spatial locality: programs tend to reference nearby memory locations
  • An example of spatial locality is accessing or updating elements of an array sequentially (both patterns are sketched below)
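
A minimal Python sketch showing both forms of locality in one loop (illustrative only; the variable names are ours):

```python
data = list(range(1024))

total = 0                    # 'total' is touched on every iteration: temporal locality
for i in range(len(data)):   # data[i] walks consecutive addresses: spatial locality
    total += data[i]         # the loop instructions themselves repeat: temporal locality
```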

Cache Concepts

  • In a write-through cache, changes are sent immediately to the next level of storage
  • In a write-back cache, changes are held in the cache until the cache block is replaced (both are sketched below)
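
A minimal Python sketch contrasting the two policies; `memory` is a stand-in for the next level of the hierarchy, and the class and method names are illustrative:

```python
memory = {}  # stand-in for the next level of storage

class WriteThroughCache:
    def __init__(self):
        self.cache = {}

    def write(self, addr, value):
        self.cache[addr] = value
        memory[addr] = value            # change propagated immediately

class WriteBackCache:
    def __init__(self):
        self.cache = {}
        self.dirty = set()

    def write(self, addr, value):
        self.cache[addr] = value
        self.dirty.add(addr)            # change held in the cache for now

    def evict(self, addr):
        if addr in self.dirty:          # written back only when the block is replaced
            memory[addr] = self.cache[addr]
            self.dirty.discard(addr)
        self.cache.pop(addr, None)
```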

Memory Hierarchy

  • Processors differ in the number and sizes of caches they provide
  • Caches can be inclusive or exclusive
  • The Intel i486, for example, had its L1 cache on the chip and its L2 cache on the motherboard

Cache and Byte Scaling

  • Kilo is 10^3, Kibi is 2^10
  • Mega is 10^6, Mebi is 2^20
  • Giga is 10^9, Gibi is 2^30
  • Tera is 10^12, Tebi is 2^40
  • Peta is 10^15, Pebi is 2^50
  • Exa is 10^18, Exbi is 2^60
  • Zetta is 10^21, Zebi is 2^70
  • Yotta is 10^24, Yobi is 2^80
  • IEC refers to International Electrotechnical Commission
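
A quick Python check of how far the decimal (SI) and binary (IEC) scales drift apart:

```python
SI  = {"kilo": 10**3, "mega": 10**6, "giga": 10**9, "tera": 10**12}
IEC = {"kibi": 2**10, "mebi": 2**20, "gibi": 2**30, "tebi": 2**40}

for (si, dec), (iec, b) in zip(SI.items(), IEC.items()):
    print(f"1 {iec}byte = {b / dec:.3f} {si}bytes")
# The gap grows with scale: a kibibyte is 2.4% larger than a kilobyte,
# but a tebibyte is already ~10% larger than a terabyte.
```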

Cache Sizing

  • Key factors to consider when sizing cache are cost and speed
  • Cache hit rate is a function of cache size
  • A small cache yields a low hit rate; a large cache yields a high hit rate
  • Thrashing is a possibility
  • The working set is the portion of a process that must be in memory for efficient execution
  • Good performance depends on keeping the cache hit rate high
  • The resident set is what is actually in physical memory
  • Bursty cache miss rates can occur on process switch and during execution of a process

Cache Lookup

  • A cache entry contains a memory address (tag) and a value; it stores a block of memory, not just one memory word
  • Intel processors use 64-byte cache blocks
  • Three common cache lookup mechanisms: fully associative, set associative, and direct mapped

Fully Associative Cache

  • In a fully associative cache, an address can be stored anywhere in the cache
  • Lookup requires checking all entries
  • On a match, the value is returned
  • Otherwise a cache miss results and the data must be fetched from main memory (see the sketch below)
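
A minimal Python sketch of the lookup; hardware checks all entries in parallel with comparators, while this loop checks them sequentially:

```python
class FullyAssociativeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []              # (tag, value) pairs; any slot may hold any address

    def lookup(self, tag):
        for t, value in self.entries:  # must compare against every entry
            if t == tag:
                return value           # cache hit
        return None                    # cache miss: fetch from main memory
```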

Direct Mapped Cache

  • In a direct-mapped cache, each address can be stored at only one location
  • Lookup hashes the address to a single entry
  • Direct mapping typically enables faster lookups
  • Direct mapping is inflexible: two addresses that hash to the same location conflict (see the sketch below)
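
A minimal Python sketch; the block size follows the 64-byte Intel figure above, while the line count is an assumption for illustration:

```python
BLOCK_SIZE = 64                       # bytes per cache block (Intel-style)
NUM_LINES  = 256                      # assumed: 256 lines = a 16 KiB cache

lines = [None] * NUM_LINES            # each slot holds (tag, block) or None

def lookup(addr):
    block_no = addr // BLOCK_SIZE
    index = block_no % NUM_LINES      # the "hash": low bits pick the one slot
    tag = block_no // NUM_LINES       # high bits distinguish colliding addresses
    entry = lines[index]
    if entry is not None and entry[0] == tag:
        return entry[1]               # hit
    return None                       # miss; addresses sharing an index conflict
```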

Set Associative Cache

  • A set-associative cache combines the benefits of direct-mapped and fully associative caches
  • Lookup is slower than direct mapped, but the design has some of the flexibility of a fully associative cache
  • A k-way set-associative cache behaves like k replicas of a direct-mapped cache
  • Lookup is performed on all k replicas (ways) in parallel
  • A hit occurs if the address is found in any of the k replicas (see the sketch below)
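
A minimal Python sketch of a k-way lookup (the set count and associativity are assumed numbers):

```python
K = 4                                 # associativity: ways per set
NUM_SETS = 64

sets = [[] for _ in range(NUM_SETS)]  # each set holds up to K (tag, value) pairs

def lookup(block_no):
    index = block_no % NUM_SETS       # direct-mapped step: pick the set
    tag = block_no // NUM_SETS
    for t, value in sets[index]:      # fully associative step: search the K ways
        if t == tag:                  # (hardware searches them in parallel)
            return value              # hit if found in any way
    return None                       # miss
```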

Page modification checks

  • Page table entries contain bookkeeping bits
  • The dirty bit indicates whether the page has been modified
  • The dirty bit is set by hardware on the first store instruction, in both the TLB and the page table entry
  • The use bit indicates whether the page has been recently used
  • The use bit is set by hardware in the page table entry on a TLB miss
  • Bookkeeping bits can be reset by the OS kernel
  • For example, the dirty bit is reset when changes to a page are flushed to disk
  • Resetting the use bit lets the kernel track whether a page has been recently used

Terminology

  • Modified, changed, and dirty are synonyms
  • Recently used, used, accessed, and referenced are synonyms

Virtual/Physical Dirty/Use Bits

  • Most machines keep dirty/use bits in the page table
  • Physical page is modified if any page table entry that points to it is modified
  • Physical page recently used if any page table entry that points to it is recently used
  • On MIPS, it is simpler to keep the dirty/use bits in the core map
  • Core map is a map of physical page frames

Cache Replacement Policy

  • On a cache miss, the policy determines which entry to replace
  • The assumption is that the new entry is more likely to be used in the near future
  • In direct-mapped caches there is no choice to make: each address maps to exactly one location
  • The policy's goal is to reduce cache misses and improve overall performance
  • A good policy also reduces the likelihood of pathological performance and promotes efficient use of the cache

Simple Replacement Policies

  • Random replacement replaces a random entry in cache
  • Random replacement can be unpredictable
  • FIFO replaces the entry that has been in the cache the longest time
  • A worst case for FIFO occurs when a program strides through a region of memory larger than the cache (both policies are sketched below)
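
Minimal Python sketches of both policies over a fixed-capacity key/value cache (the names are ours):

```python
import random
from collections import deque

class RandomCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}

    def insert(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = random.choice(list(self.store))  # evict an arbitrary entry
            del self.store[victim]
        self.store[key] = value

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.arrivals = deque()        # insertion order, oldest on the left

    def insert(self, key, value):
        if key not in self.store:
            if len(self.store) >= self.capacity:
                victim = self.arrivals.popleft()  # evict the longest-resident entry
                del self.store[victim]
            self.arrivals.append(key)
        self.store[key] = value
```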

LRU and LFU

  • Least Recently Used (LRU) exploits the principle of temporal locality
  • LRU replaces the entry that has not been used for the longest time in the past (see the sketch below)
  • Least Frequently Used (LFU) replaces the cache entry used the least often in the recent past
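
A minimal LRU sketch using Python's OrderedDict: each access moves an entry to the end, so the least recently used entry is always at the front:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                     # miss
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
        self.store[key] = value
```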

Belady’s Anomaly

  • Belady's anomaly is the phenomenon in which increasing the number of page frames increases the number of page faults for certain memory access patterns
  • It is most commonly encountered with the first-in first-out (FIFO) page replacement algorithm (reproduced in the sketch below)
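
The anomaly can be reproduced in Python with FIFO and the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: four frames suffer more faults than three.

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) >= num_frames:
                frames.discard(order.popleft())  # evict the oldest arrival
            frames.add(page)
            order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 page faults
print(fifo_faults(refs, 4))   # 10 page faults: more frames, more faults
```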

Clock Algorithm

  • The clock algorithm approximates LRU
  • It works by periodically sweeping through all pages
  • If a page is unused, it is reclaimed
  • If a page is used, it is marked as unused and given another chance (see the sketch below)
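
A minimal Python sketch of the sweep; the "hand" cycles over page frames, clearing use bits and reclaiming the first frame found unused:

```python
class ClockReplacer:
    def __init__(self, num_frames):
        self.use_bits = [0] * num_frames
        self.hand = 0

    def access(self, frame):
        self.use_bits[frame] = 1            # hardware sets the use bit on access

    def reclaim(self):
        while True:
            if self.use_bits[self.hand] == 0:
                victim = self.hand          # unused since the last sweep: reclaim
                self.hand = (self.hand + 1) % len(self.use_bits)
                return victim
            self.use_bits[self.hand] = 0    # used: clear the bit, give another chance
            self.hand = (self.hand + 1) % len(self.use_bits)
```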

Nth Chance Algorithm

  • The Nth-chance algorithm is a variant of Not Recently Used (NRU)
  • Instead of one bit per page, it keeps an integer per page
  • The integer, notInUseSince, counts the number of sweeps since the page was last used
  • The algorithm periodically sweeps through all page frames; if a page has been used, notInUseSince is reset to 0
  • If the page has not been used, notInUseSince is incremented
  • If notInUseSince reaches N, the page is reclaimed (see the sketch below)
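
A minimal Python sketch of one periodic sweep, keeping the notInUseSince counter from the notes; the value of N and the function shape are assumptions:

```python
N = 3  # assumed threshold: sweeps without use before a page is reclaimed

def sweep(not_in_use_since, used):
    """One periodic pass over all frames; returns the frames to reclaim."""
    reclaimed = []
    for frame in range(len(not_in_use_since)):
        if used[frame]:
            not_in_use_since[frame] = 0    # recently used: reset the counter
            used[frame] = False            # clear the use bit for the next sweep
        else:
            not_in_use_since[frame] += 1   # one more sweep without use
            if not_in_use_since[frame] >= N:
                reclaimed.append(frame)    # Nth chance exhausted
    return reclaimed
```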

Page Coloring

  • Page coloring is relevant when the cache size is much larger than the page size
  • It applies to direct-mapped and set-associative caches
  • Multiple pages map to the same region of the cache
  • The OS's assignment of physical page frames therefore matters
  • Example: an 8 MB cache and 4 KB pages
  • In this example, 1 of every 2K pages lands in the same place in the cache (worked below)
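
Working the example in Python: the number of "colors" is the number of page-sized regions in the cache, and frames with the same color collide:

```python
CACHE_SIZE = 8 * 2**20                # 8 MB cache
PAGE_SIZE  = 4 * 2**10                # 4 KB pages

num_colors = CACHE_SIZE // PAGE_SIZE  # 2048 page-sized cache regions
print(num_colors)                     # 2048: 1 of every 2K pages collides

def color(frame_number):
    return frame_number % num_colors  # the OS picks frames to spread colors evenly
```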

Zipf Distribution

  • Caching behavior of many systems is not well characterized by the working set model
  • An alternative is the Zipf distribution
  • In Zipf, popularity ≈ 1/k^α for the kth most popular item, with 1 < α < 2
  • Examples of Zipf models include web pages, movies, library books, words in text, salaries, and city populations
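
A small Python sketch of why this matters for cache design: with an assumed α = 1.2 and 100,000 items, caching just the top 1% captures most accesses.

```python
alpha = 1.2                   # assumed exponent, within the stated 1 < α < 2
n = 100_000                   # assumed number of distinct items

weights = [1 / k**alpha for k in range(1, n + 1)]   # popularity of the kth item
total = sum(weights)

cached = 1_000                # cache only the 1,000 most popular items (1%)
hit_rate = sum(weights[:cached]) / total
print(f"{hit_rate:.1%}")      # roughly 85%: a tiny cache serves most requests
```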
