Questions and Answers
Which of the following is the most accurate description of cache memory?
- Large, slow, cheap, dynamic RAM
- Small, fast, expensive, static RAM (correct)
- Small, slow, cheap, dynamic RAM
- Large, fast, expensive, static RAM
What defines a 'cache block' in the context of cache memory?
- A contiguous unit of cache storage, typically containing multiple memory words. (correct)
- The smallest addressable unit of memory in the system.
- A single bit used to indicate whether a cache line is valid.
- The tag associated with a cache line, identifying the memory it contains.
Which statement accurately describes the 'temporal locality' principle in caching?
- Programs tend to reference widely scattered memory locations.
- Programs never access the same memory location more than once.
- Programs tend to reference memory locations randomly.
- Programs tend to reference the same memory locations multiple times over a short period. (correct)
Which scenario exemplifies spatial locality?
In a cache read operation, what happens when the requested data is not found in the cache?
What is the primary difference between 'write-through' and 'write-back' cache policies?
Which memory level in the memory hierarchy typically has the lowest latency (fastest access time)?
Which is a characteristic of a small cache?
What is 'thrashing' in the context of cache memory?
Which best describes a 'working set'?
What is the resident set?
Which of the following is NOT a common mechanism for cache lookup?
In a 'fully associative' cache, where can a memory address be stored?
Which is a downside of a fully associative cache?
What is the key characteristic of a 'direct mapped' cache?
What is the primary advantage of a direct-mapped cache?
How does a 'set associative' cache combine the benefits of direct-mapped (DM) and fully associative (FA) caches?
What is the purpose of the 'dirty bit' in a page table entry?
Why is it important for the OS kernel to be able to reset bookkeeping bits in page tables?
How does keeping dirty/use bits in the core map on MIPS simplify memory management?
What is the purpose of a cache replacement policy?
Which is NOT a goal of cache replacement policies?
Under a FIFO cache replacement policy, which entry is replaced on a cache miss?
What is a potential problem with the FIFO cache replacement policy?
How does the LRU (Least Recently Used) cache replacement policy work?
In the context of cache replacement policies, what is LFU designed to do?
What is Belady's Anomaly?
How does the Clock algorithm approximate LRU replacement?
What is the purpose of the 'Nth chance' modification to the Not Recently Used (NRU) page replacement algorithm?
What is page coloring primarily designed to address?
How does the operating system use page coloring to improve cache performance?
Which is NOT an example of Zipf's Law?
What does the Zipf distribution suggest about caching behavior in many systems?
According to the Zipf distribution, what is the relationship between the popularity of an item and its rank?
Why is the Zipf distribution relevant to cache design?
What is the impact of memory access patterns that conform to a Zipf distribution on caching strategies?
Flashcards
Cache
Small, fast, expensive static RAM used for quick data access.
Cache Block
The fundamental unit of storage in a cache, containing a small number of memory words.
Cache Hit
Accessing the requested data in the cache.
Cache Miss
The cache does not hold a copy of the requested data, so it must be fetched from the next level of storage.
Temporal Locality
Programs tend to reference the same memory locations multiple times over a short period.
Spatial Locality
Programs tend to reference nearby memory locations.
Fully Associative Cache
An address can be stored anywhere in the cache; lookup must check all entries.
Direct Mapped Cache
Each address can be stored at only one cache location, found by hashing the address to an entry.
Set Associative Cache
Each address maps to one set of k entries, all checked in parallel.
Write-through
Changes are sent immediately to the next level of storage.
Write-back
Changes are stored in the cache until the block is replaced.
Working Set
The portion of a process needed for execution.
Resident Set
The portion of a process actually in memory.
Cache Replacement Policy
The rule for choosing which entry to evict on a cache miss.
Random Replacement
Replaces a random entry in the cache.
FIFO
Replaces the entry that has been in the cache the longest.
LRU (Least Recently Used)
Replaces the entry that has not been used for the longest time in the past.
LFU (Least Frequently Used)
Replaces the entry used least often in the recent past.
Belady's Anomaly
Increasing the number of page frames can increase the number of page faults for some access patterns, notably under FIFO.
Clock Algorithm
Approximates LRU by periodically sweeping pages, reclaiming unused ones and marking used ones as unused.
Nth Chance Algorithm
A clock variant that reclaims a page only after N sweeps without use.
Page Coloring
OS page placement that spreads physical pages evenly across cache locations.
Zipf Distribution
Popularity of the kth most popular item is proportional to 1/k^α, with 1 < α < 2.
Study Notes
Definitions
- Cache defined as small, fast, expensive, static RAM
- Cache holds copy of data that is faster to access than the original
- Cache hit occurs if the cache has a copy of the data
- Cache miss occurs if the cache does not have a copy
- Cache block is unit of cache storage
- Cache block includes 4-64 memory words in hardware memory
- Cache block can include powers of 2 of hardware page size in file system
Cache Usefulness
- For a cache to be useful, the cost of retrieving from the cache must be less than the cost of fetching from memory
- Likelihood of cache hit must be high enough
- Sources of predictability include temporal and spatial locality
Temporal and Spatial Locality
- Temporal locality describes when programs reference the same memory locations multiple times
- Examples of temporal locality: instructions in a loop, an incremented counter variable, frequent accesses to the same data structure
- Spatial locality describes when programs reference nearby memory locations
- An example of Spatial locality is accessing/updating elements of an array
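Both kinds of locality can be seen in one loop; a small illustrative Python sketch (not from the source):

```python
# Illustrative only: the same loop exhibits both kinds of locality.
def sum_array(a):
    total = 0            # 'total' touched every iteration -> temporal locality
    for i in range(len(a)):
        total += a[i]    # consecutive elements -> spatial locality
    return total

print(sum_array(list(range(10))))  # 45
```

The loop body itself is also temporal locality: the same instructions are fetched on every iteration.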
Cache Concepts
- Write-through cache changes sent immediately to next level of storage
- Write-back cache changes stored until cache block is replaced
Memory Hierarchy
- Processors have different numbers and sizes of caches
- Caches can be inclusive or exclusive
- The Intel 486 processor had the L1 cache on the chip and the L2 cache on the motherboard
Cache and Byte Scaling
- Kilo is 10^3, Kibi is 2^10
- Mega is 10^6, Mebi is 2^20
- Giga is 10^9, Gibi is 2^30
- Tera is 10^12, Tebi is 2^40
- Peta is 10^15, Pebi is 2^50
- Exa is 10^18, Exbi is 2^60
- Zetta is 10^21, Zebi is 2^70
- Yotta is 10^24, Yobi is 2^80
- IEC refers to International Electrotechnical Commission
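The gap between the decimal (SI) and binary (IEC) prefixes widens with scale; a quick sketch:

```python
# Decimal (SI) vs binary (IEC) prefixes: the gap grows with scale.
KILO, KIBI = 10**3, 2**10
GIGA, GIBI = 10**9, 2**30

print(KIBI - KILO)           # 24 bytes at the kilo scale
print(GIBI - GIGA)           # 73741824 bytes at the giga scale
print(f"{GIBI / GIGA:.3f}")  # 1.074: about a 7.4% difference
```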
Cache Sizing
- Key factors to consider when sizing cache are cost and speed
- Cache hit rate is a function of cache size
- A small cache yields a low hit rate; a larger cache yields a higher hit rate
- Thrashing is a possibility
- Working set is a portion of the process needed for execution
- Good performance and cache hit rate are key for success
- Resident set is what is actually in memory
- Bursty cache miss rates can occur on process switch and during execution of a process
Cache Lookup
- Cache contains memory address and value, stores block of memory, not just one memory word
- Intel processors typically use 64-byte cache blocks
- Three common Cache Lookup mechanisms include Fully associative, Set Associative, and Direct Mapped
Fully Associative Cache
- With fully associative cache address can be stored anywhere in the cache
- Requires checking all entries
- Upon match returns the value
- Otherwise cache miss results and have to go fetch from main memory
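A toy fully associative lookup (illustrative Python; class and slot names are my own, and the eviction policy here is a FIFO placeholder):

```python
# Toy fully associative cache: any block can sit in any slot,
# so lookup must compare the tag against every entry.
class FullyAssociativeCache:
    def __init__(self, num_slots):
        self.slots = []          # list of (tag, value) pairs
        self.num_slots = num_slots

    def lookup(self, tag):
        for t, v in self.slots:  # check ALL entries
            if t == tag:
                return v         # hit
        return None              # miss: caller fetches from main memory

    def insert(self, tag, value):
        if len(self.slots) >= self.num_slots:
            self.slots.pop(0)    # placeholder eviction policy (FIFO)
        self.slots.append((tag, value))

c = FullyAssociativeCache(2)
c.insert(0x1000, "A")
print(c.lookup(0x1000))  # hit -> "A"
print(c.lookup(0x2000))  # miss -> None
```

Hardware does the "check all entries" step in parallel with comparators; this sequential scan only models the logic.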
Direct Mapped Cache
- Each address can be stored at only one location in direct mapped cache
- The address is hashed to a single entry for lookup
- Direct mapped typically enables faster lookups
- Direct mapped is not flexible; consider what happens if 2 addresses hash to the same location
Set Associative Cache
- Set associative combines benefits of both direct mapped and fully associative caches
- Set associative slower lookup than direct mapped, but has flexibility of fully associative cache
- A k-way set associative cache has k replicas (ways)
- Lookup is performed on all k replicas in parallel
- Hit occurs if found in any of the k replicas
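A sketch of the set-selection plus parallel-check logic (illustrative; in hardware the k ways are compared simultaneously):

```python
# Toy k-way set associative lookup: the address picks a set; within the
# set, all k ways are checked (in hardware, in parallel).
NUM_SETS, K = 4, 2
cache = [[None] * K for _ in range(NUM_SETS)]  # cache[set][way] = tag

def lookup(tag):
    s = tag % NUM_SETS
    return any(cache[s][w] == tag for w in range(K))

cache[3][0] = 7    # 7 % 4 == 3, so tag 7 lives in set 3
print(lookup(7))   # True: hit in set 3, way 0
print(lookup(11))  # False: 11 % 4 == 3 too, but tag 11 is not present
```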
Page modification checks
- Page table entries contain bookkeeping bits
- Dirty bit indicates if page has been modified
- The dirty bit is set by hardware on the first store instruction, in both the TLB and the page table entry
- Use bit shows if page has been recently used
- Use bit set by hardware in page table entry on TLB miss
- Bookkeeping bits can be reset by the OS kernel
- Resetting the dirty bit marks when changes to a page have been flushed to disk
- Resetting the use bit helps track whether a page is recently used
Terminology
- Modified, changed and dirty are the same
- Recently used, used, accessed, and referenced are the same
Virtual/Physical Dirty/Use Bits
- Most machines keep dirty/use bits in the page table
- Physical page is modified if any page table entry that points to it is modified
- Physical page recently used if any page table entry that points to it is recently used
- On MIPS, simpler to keep dirty/use bits in the core map
- Core map is a map of physical page frames
Cache Replacement Policy
- With cache miss, determine which entry to replace
- Assuming the new entry is more likely to be used in the near future
- In direct mapped caches, determining which entry to replace is not an issue
- Policy goal is to reduce cache misses and improve overall performance
- Policy goal helps reduce likelihood of poor performance and promotes efficient use of cache
Simple Replacement Policies
- Random replacement replaces a random entry in cache
- Random replacement can be unpredictable
- FIFO replaces the entry that has been in the cache the longest time
- A worst case for FIFO occurs when a program strides through a region of memory larger than the cache
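The stride worst case is easy to reproduce; a toy FIFO miss counter (illustrative Python):

```python
from collections import deque

# FIFO cache miss counter: striding over more blocks than the cache
# holds makes EVERY access a miss.
def fifo_misses(refs, capacity):
    cache, misses = deque(), 0
    for r in refs:
        if r not in cache:
            misses += 1
            if len(cache) == capacity:
                cache.popleft()  # evict the oldest entry
            cache.append(r)
    return misses

stride = [0, 1, 2, 3, 4] * 3   # cycle over 5 blocks
print(fifo_misses(stride, 4))  # 15: all 15 accesses miss
print(fifo_misses(stride, 5))  # 5: the cache holds the whole cycle
```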
LRU and LFU
- Least Recently Used (LRU) uses principle of temporal locality
- LRU replaces entry that has not been used for the longest time in the past
- Least Frequently Used (LFU) replaces the cache entry used the least often in the recent past
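A minimal LRU sketch using Python's `OrderedDict` (illustrative; the class and method names are my own):

```python
from collections import OrderedDict

# Minimal LRU: on a hit, move the entry to the most-recently-used end;
# on a miss with a full cache, evict from the least-recently-used end.
class LRUCache:
    def __init__(self, capacity):
        self.d = OrderedDict()
        self.capacity = capacity

    def access(self, key):
        hit = key in self.d
        if hit:
            self.d.move_to_end(key)         # refresh recency
        else:
            if len(self.d) == self.capacity:
                self.d.popitem(last=False)  # evict least recently used
            self.d[key] = True
        return hit

c = LRUCache(2)
for k in ["a", "b", "a", "c"]:  # "b" is least recent when "c" arrives
    c.access(k)
print(list(c.d))  # ['a', 'c']
```

An LFU variant would instead keep a use count per entry and evict the minimum; real hardware approximates both rather than tracking them exactly.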
Belady’s Anomaly
- Belady's anomaly describes the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns
- This phenomenon is commonly experienced when using the first-in first-out page replacement algorithm
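The anomaly shows up with the classic reference string from Belady's example; a quick demonstration under FIFO:

```python
from collections import deque

# Count FIFO page faults for a reference string and frame count.
def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()
            mem.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]  # classic anomaly sequence
print(fifo_faults(refs, 3))  # 9 faults with 3 frames
print(fifo_faults(refs, 4))  # 10 faults with 4 frames: more memory, more faults
```

LRU does not exhibit the anomaly (it is a "stack algorithm": the pages held with k frames are always a subset of those held with k+1).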
Clock Algorithm
- Clock algorithm can estimate LRU
- It periodically sweeps through all pages
- If page is unused it is reclaimed
- If page is used, then mark as unused
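One reclaim step of the clock sweep can be sketched as follows (illustrative Python; the frame representation is my own):

```python
# Toy clock sweep over page frames: each frame has a 'used' bit.
# The hand reclaims the first unused frame, clearing used bits in passing.
def clock_reclaim(frames, hand):
    """frames: list of {'page', 'used'} dicts. Returns (victim, new hand)."""
    while True:
        f = frames[hand]
        if f["used"]:
            f["used"] = False  # second chance: mark unused, advance
            hand = (hand + 1) % len(frames)
        else:
            return hand, (hand + 1) % len(frames)

frames = [{"page": p, "used": u}
          for p, u in [(10, True), (11, False), (12, True)]]
victim, hand = clock_reclaim(frames, 0)
print(victim)  # 1: page 11 was unused; page 10's bit was cleared in passing
```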
Nth Chance Algorithm
- The Nth chance algorithm is a variant of Not Recently Used (NRU)
- Instead of one bit per page, keep an integer per page
- The integer, notInUseSince, counts the number of sweeps since the page was last used
- Periodically sweep through all page frames; if a page has been used, reset notInUseSince to 0
- If the page has not been used, increment notInUseSince
- If notInUseSince reaches N, reclaim the page
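One sweep of the counter update can be sketched as follows (illustrative Python; the page representation and N value are my own):

```python
# One sweep of the Nth-chance algorithm: reset the counter for used
# pages, increment it for unused ones, reclaim when it reaches N.
N = 3

def sweep(pages):
    reclaimed = []
    for p in pages:
        if p["used"]:
            p["used"] = False
            p["notInUseSince"] = 0       # recently used: start over
        else:
            p["notInUseSince"] += 1
            if p["notInUseSince"] >= N:  # N sweeps without use
                reclaimed.append(p["id"])
    return reclaimed

pages = [{"id": 0, "used": True,  "notInUseSince": 2},
         {"id": 1, "used": False, "notInUseSince": 2}]
print(sweep(pages))  # [1]: page 1 reaches N unused sweeps; page 0 resets
```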
Page Coloring
- Page coloring is relevant when the cache size is much larger than page size
- Page coloring can be direct mapped or set associative
- Multiple pages map to the same cache line
- OS page assignment matters
- Example: 8MB cache and 4KB pages
- In this example, 1 of every 2K pages lands in same place in cache
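The 8 MB / 4 KB example above works out as follows (for a direct-mapped cache; associativity would divide the color count):

```python
# Number of page "colors" for a direct-mapped cache: how many distinct
# cache positions a page can occupy = cache size / page size.
CACHE = 8 * 2**20   # 8 MiB cache
PAGE  = 4 * 2**10   # 4 KiB pages
colors = CACHE // PAGE
print(colors)       # 2048: physical pages with the same (frame % 2048)
                    # land in the same place in the cache
```

The OS can therefore assign physical frames so that a process's hot pages get distinct colors rather than piling onto one cache region.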
Zipf Distribution
- Caching behavior of many systems is not well characterized by the working set model
- An alternative is the Zipf distribution
- In Zipf, popularity ~ 1/k^α for the kth most popular item, with 1 < α < 2
- Examples of Zipf models include web pages, movies, library books, words in text, salaries, and city populations
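A sketch of why Zipf workloads cache differently from working-set workloads: hit rate grows only slowly with cache size because the popularity tail is heavy (illustrative Python; the function and α value are my own):

```python
# Hit rate of a cache holding the most popular items under a Zipf
# popularity distribution: weight of item k is 1/k^alpha.
def zipf_hit_rate(cache_items, total_items, alpha=1.2):
    weights = [1 / k**alpha for k in range(1, total_items + 1)]
    return sum(weights[:cache_items]) / sum(weights)

# Growing the cache 10x yields only a modest hit-rate improvement:
print(zipf_hit_rate(100, 10_000))    # top 1% of items cached
print(zipf_hit_rate(1_000, 10_000))  # top 10% of items cached
```

This is why Zipf-like workloads (web caches, video libraries) show diminishing returns from larger caches rather than the sharp knee the working-set model predicts.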