Introduction to Cache Memory
Document Details
Author: Raymond M. Cruz, MIT
Summary
This presentation introduces the concept of cache memory, explaining its purpose, its levels (L1, L2, L3), mapping techniques (direct-mapped, fully associative, set-associative), and replacement policies (LRU, FIFO, random, LFU). It highlights how cache memory improves computer system performance by storing frequently accessed data closer to the CPU.
Full Transcript
Introduction to Cache Memory
Raymond M. Cruz, MIT

Cache Memory
Cache memory is a small, high-speed memory located close to the CPU. Its primary purpose is to store copies of frequently accessed data from main memory (RAM) so that the CPU can retrieve this data more quickly than if it had to access the slower main memory.

a. Purpose of Cache Memory:
- Speed: Cache memory is much faster than main memory, allowing quicker access to the data the CPU needs most often.
- Efficiency: By storing frequently accessed data, the cache reduces the average time required to access memory, speeding up the overall performance of the computer system.

Hierarchy and Types of Cache Memory
Caches are typically organized in a hierarchy to balance speed, size, and cost. The closer a cache is to the CPU, the faster it operates.

a. Levels of Cache:
- L1 Cache (Level 1):
  - Location: Integrated directly into the CPU chip.
  - Speed: The fastest type of cache, operating at the speed of the CPU core.
  - Size: Typically very small, ranging from 16 KB to 64 KB.
  - Purpose: Stores very frequently accessed data and instructions. It is usually divided into separate instruction and data caches.
- L2 Cache (Level 2):
  - Location: Can be integrated into the CPU or located on a separate chip close to the CPU.
  - Speed: Slower than L1 but faster than L3 cache and main memory.
  - Size: Larger than L1, typically ranging from 256 KB to a few MB.
  - Purpose: Serves as an intermediate store, holding data and instructions that are not in the L1 cache but are likely to be accessed soon.
- L3 Cache (Level 3):
  - Location: Often shared among multiple CPU cores on the same chip.
  - Speed: Slower than L1 and L2, but still faster than main memory.
  - Size: Larger than L2, typically ranging from a few MB to tens of MB.
  - Purpose: Acts as a larger reservoir of data that might be needed by the CPU, improving performance in multi-core processors by sharing data between cores.

b. Unified vs. Split Cache:
- Unified Cache: A single cache that stores both instructions and data.
- Split Cache: Separate caches for instructions (I-cache) and data (D-cache), typically seen at the L1 level to optimize the retrieval process.

Operation of Cache Memory
The operation of cache memory involves several key concepts.

a. Cache Hit and Cache Miss:
- Cache Hit: Occurs when the CPU finds the required data or instruction in the cache, allowing for immediate access and processing.
- Cache Miss: Occurs when the data or instruction is not found in the cache, requiring the CPU to fetch it from the slower main memory.

b. Cache Mapping Techniques:
Cache mapping determines how data from main memory is placed in the cache.
1. Direct-Mapped Cache:
   - Structure: Each block of main memory maps to exactly one location in the cache.
   - Advantages: Simple and fast to implement.
   - Disadvantages: Prone to conflicts, as multiple blocks may compete for the same cache location, leading to more cache misses (see the sketch after this list).
2. Fully Associative Cache:
   - Structure: Any block of main memory can be placed in any location in the cache.
   - Advantages: No conflicts, leading to fewer cache misses.
   - Disadvantages: More complex and expensive to implement due to the need for associative searching.
3. Set-Associative Cache:
   - Structure: A compromise between direct-mapped and fully associative caches. The cache is divided into sets, and each block of memory maps to any location within a specific set.
   - Advantages: Balances the speed of a direct-mapped cache with the flexibility of a fully associative cache (a sketch combining this structure with LRU replacement appears after the replacement policies below).
   - Disadvantages: More complex than a direct-mapped cache but simpler than a fully associative cache.
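To make the direct-mapped scheme concrete, here is a minimal Python sketch of how an address is split into tag, index, and offset fields and how a lookup becomes a hit or a miss. It is not from the presentation: the block size, line count, and the `DirectMappedCache` name are illustrative assumptions.

```python
# Minimal direct-mapped lookup sketch (sizes are illustrative assumptions).
# An address is viewed as [ tag | index | offset ]:
#   offset picks a byte within a block,
#   index picks the ONE line the block may occupy,
#   tag identifies which memory block currently lives in that line.
BLOCK_SIZE = 16   # bytes per block
NUM_LINES = 64    # lines in the cache

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * NUM_LINES   # stored tag per line (None = empty)

    def access(self, address: int) -> bool:
        """Return True on a hit; on a miss, install the block and return False."""
        block = address // BLOCK_SIZE    # strip the offset bits
        index = block % NUM_LINES        # the single legal line for this block
        tag = block // NUM_LINES         # disambiguates blocks sharing that line
        if self.tags[index] == tag:
            return True                  # hit: the block is already cached
        self.tags[index] = tag           # miss: fetch block, overwrite old tag
        return False

cache = DirectMappedCache()
print(cache.access(0x1234))  # False: first touch is a compulsory miss
print(cache.access(0x1234))  # True: the same block is now cached
```

The conflict weakness is visible here: two blocks whose addresses share the same index evict each other even when every other line in the cache is empty.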
Cache Replacement Policies
When a cache miss occurs and the cache is full, the system must decide which existing data to replace with the new data. This decision is governed by cache replacement policies.

a. Common Replacement Policies:
1. Least Recently Used (LRU):
   - Concept: Replaces the block that has not been used for the longest time.
   - Advantages: Effectively removes data that is less likely to be needed soon.
   - Disadvantages: Requires tracking the usage history of all blocks, which can be complex.
2. First-In, First-Out (FIFO):
   - Concept: Replaces the oldest block in the cache, regardless of how often it has been accessed.
   - Advantages: Simple to implement.
   - Disadvantages: Does not consider how frequently or recently the data has been used, which can lead to inefficiencies.
3. Random Replacement:
   - Concept: Randomly selects a block to replace.
   - Advantages: Simple and fast.
   - Disadvantages: May lead to suboptimal performance, as it does not take usage patterns into account.
4. Least Frequently Used (LFU):
   - Concept: Replaces the block that has been accessed the fewest times.
   - Advantages: Keeps frequently accessed data in the cache longer.
   - Disadvantages: Requires counting accesses, which adds complexity.
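The set-associative structure and the LRU policy combine naturally: each set behaves like a tiny fully associative cache, and LRU picks the victim within the set. Below is a minimal Python sketch of a 4-way set-associative cache with per-set LRU; the sizes, the `SetAssociativeLRU` name, and the use of `OrderedDict` to track recency are my own illustrative choices, not the presentation's.

```python
from collections import OrderedDict

BLOCK_SIZE = 16   # bytes per block (illustrative)
NUM_SETS = 16     # the index field selects one of these sets
WAYS = 4          # blocks that fit in each set

class SetAssociativeLRU:
    def __init__(self):
        # One OrderedDict per set: keys are tags, least recently used first.
        self.sets = [OrderedDict() for _ in range(NUM_SETS)]

    def access(self, address: int) -> bool:
        block = address // BLOCK_SIZE
        index = block % NUM_SETS         # which set the block may live in
        tag = block // NUM_SETS
        ways = self.sets[index]
        if tag in ways:
            ways.move_to_end(tag)        # hit: mark as most recently used
            return True
        if len(ways) == WAYS:
            ways.popitem(last=False)     # set full: evict least recently used
        ways[tag] = True                 # miss: install the new block
        return False

cache = SetAssociativeLRU()
# All six addresses map to set 0, so the fifth access overflows the set.
for addr in (0x000, 0x100, 0x200, 0x300, 0x400, 0x000):
    print(hex(addr), "hit" if cache.access(addr) else "miss")
```

The fifth address evicts the least recently used tag, the one for 0x000, so the final access to 0x000 misses again; a fully associative cache of the same total capacity would have hit.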
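To see why the choice of policy matters, the following toy simulation (again my own illustration, not from the slides) replays one access trace against LRU, FIFO, and LFU with room for only two blocks and reports the hits. One simplification to note: this LFU counts lifetime accesses, whereas real designs usually count only while a block is resident.

```python
def simulate(trace, policy, capacity=2):
    """Replay `trace` and return the hit count under the given policy."""
    cache, order, counts, hits = set(), [], {}, 0
    for block in trace:
        counts[block] = counts.get(block, 0) + 1   # lifetime access count
        if block in cache:
            hits += 1
            if policy == "LRU":
                order.remove(block)
                order.append(block)                # refresh recency on a hit
        else:
            if len(cache) == capacity:             # full: choose a victim
                if policy == "LFU":
                    victim = min(cache, key=counts.get)   # fewest accesses
                else:
                    victim = order[0]              # LRU/FIFO: front of the queue
                cache.remove(victim)
                order.remove(victim)
            cache.add(block)
            order.append(block)
    return hits

trace = list("ABACABADAB")   # block A is "hot", the others are transient
for policy in ("LRU", "FIFO", "LFU"):
    print(f"{policy}: {simulate(trace, policy)} hits out of {len(trace)}")
```

On this trace LRU and LFU each score 4 hits while FIFO scores only 2: FIFO keeps evicting the hot block A simply because it is the oldest entry, which is exactly the inefficiency noted in the list above.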