Questions and Answers
Which type of memory does not require a refresh?
- DRAM
- SRAM (correct)
- ROM
- EPROM
Which type of memory has a smaller cell size, leading to higher density?
- DRAM (correct)
- EEPROM
- ROM
- SRAM
Which memory type is generally used for cache memory?
- ROM
- Flash Memory
- SRAM (correct)
- DRAM
Which memory type has faster access times due to a simpler read process?
Which memory type is difficult to integrate with logic circuits due to its special IC process?
Which of the following is an advantage of using DRAM?
What is the main reason for using SRAM in cache memory?
In a memory hierarchy, what serves as a staging area for a subset of data used by a slower device?
Which type of memory has the lowest cost per bit?
What is the role of cache memory in relation to the CPU and main memory?
What is the primary purpose of using cache memory?
When the CPU finds the requested data in the cache memory, it is called a:
What happens when a 'cache miss' occurs?
Which memory is directly accessed by the CPU for storing and executing instructions?
What is the purpose of keeping a copy of data from main memory in the cache?
If data is not found in the cache and must be fetched from main memory, what is this event called?
What is the term for ensuring that the data in cache is the most up-to-date?
Which of the following is a type of cache organization?
What is the purpose of a cache directory?
Higher levels in the memory hierarchy are generally:
Which method updates main memory at a convenient time?
What does LRU stand for in the context of cache replacement policies?
Which memory technology is characterized by its need for periodic refresh?
Which memory technology generally requires more transistors per bit?
Which level of the memory hierarchy is the fastest and most expensive per byte?
Flashcards
SRAM
Memory that uses transistors to store data; faster but less dense and more expensive than DRAM.
DRAM
Memory that uses capacitors to store data; slower but denser and cheaper than SRAM; requires periodic refresh.
SRAM vs DRAM: Speed
SRAM is faster; its simpler read operation gives shorter access times.
SRAM vs DRAM: Cost
SRAM costs roughly 100X per bit; DRAM roughly 1X.
SRAM: Typical use case
Cache memories.
DRAM: Typical Use Case
Main memories.
Cache Memory
A small, fast SRAM positioned between the CPU and main memory that holds copies of frequently used data.
Hybrid Memory System
A design that pairs DRAM main memory with SRAM cache, combining DRAM's density and low cost with SRAM's speed.
Hit Rate
(Number of hits / number of bus cycles) x 100%.
Cache Miss
The event in which requested data is not found in the cache and must be fetched from main memory.
Caching a loop
After the first iteration fetches a loop's code from main memory, the routine is copied into the cache so later iterations run from the cache.
Memory Hierarchy
Levels L0-L5, from fast, costly registers down to large, cheap remote storage; each level acts as a cache for the level below it.
Registers (L0)
The fastest, costliest level; CPU registers hold words retrieved from the L1 cache.
Cache
A smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.
Fully Associative Cache
A cache organization in which a memory block may be placed in any cache line.
Direct Mapped Cache
A cache organization in which each memory block maps to exactly one cache line.
Set Associative Cache
A cache organization in which a memory block maps to one set but may occupy any way within that set.
Cache Coherency
Ensuring the cache always holds the most recent data; important in multiprocessor and DMA systems.
Cache Replacement Policy
The rule (e.g. LRU, FIFO, random) that decides which cache block is swapped out or flushed to main memory.
Cache Fill Block Size
The amount of data (typically 4-32 bytes) brought into the cache from main memory on a miss.
Write-Through Cache
The CPU writes to both the cache and main memory.
Write-Back Cache
The CPU writes to the cache only; the cache controller updates main memory at a convenient time.
Cache Directory
The cache controller's record of which addresses are currently stored in the cache, compared against each CPU request.
Stale Data
Out-of-date data held in the cache after the corresponding main memory contents have changed.
Study Notes
- SRAM and DRAM are two types of memory cells
- The comparison of the two is shown below
SRAM
- Has a larger cell size resulting in lower density and higher cost per bit
- Does not require refreshing
- Provides faster access due to simpler read operations
- Benefits from a standard IC process, making it suitable for integration with logic
DRAM
- Has a smaller cell size, which leads to higher density and lower cost per bit
- Requires periodic refreshing, including after a read operation
- Has a complex read process, causing longer access times
- Involves a special IC process, making it difficult to integrate with logic circuits
SRAM vs DRAM Summary
- SRAM uses 6 transistors per bit, DRAM uses 1 transistor per bit
- SRAM has an access time of 1X, DRAM has an access time of 10X
- SRAM has a relative cost per bit of about 100X; DRAM about 1X
- SRAM is used in cache memories, while DRAM is used in main memories
Cache Memory
- Is a widely used memory design for high-performance CPUs
- Utilizes DRAMs for main memory alongside SRAM for cache memory
- Takes advantage of the speed of SRAM and the density/cost-effectiveness of DRAM
- Pure SRAM implementation is too expensive
- Pure DRAM degrades performance
- Is positioned between the CPU and main memory
Cache Memory Operation
- When the CPU needs data, it first checks the cache
- If the data is present in the cache, it is provided to the CPU with zero wait states
- If the data is not in the cache, the memory controller transfers it from main memory to the CPU and copies it into the cache
- The cache controller keeps a record of what data is stored in the cache
- When the CPU requests data, the requested address is compared with the addresses held by the cache controller
- If the addresses match, the data is sent to the CPU with zero wait states; this is a "hit"
- If the addresses do not match, this is a "miss": the controller fetches the data from main memory and sends it to both the CPU and the cache for future use
- Keeping copies of data allows subsequent requests to result in hits with zero wait states
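The hit/miss flow above can be sketched with a toy dictionary-based cache (the `access` function and the dictionary representation are illustrative assumptions, not real controller logic):

```python
# Minimal sketch of the cache lookup flow: the cache holds address -> data
# copies; a miss fetches from "main memory" and keeps a copy for future hits.

def access(cache, main_memory, address):
    """Return (data, hit?) for one CPU request."""
    if address in cache:             # addresses match: a "hit", zero wait states
        return cache[address], True
    data = main_memory[address]      # a "miss": fetch from main memory
    cache[address] = data            # keep a copy so later requests hit
    return data, False

main_memory = {0x100: "ADD", 0x104: "SUB"}
cache = {}
_, hit1 = access(cache, main_memory, 0x100)   # first request: miss
_, hit2 = access(cache, main_memory, 0x100)   # same address again: hit
print(hit1, hit2)  # False True
```

This is the same effect as the caching-a-loop example below: the first access pays the main-memory cost, repeats are served from the cache.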
Caching Loops
- A loop segment of code can be fetched, executed, and placed in the cache
- The first execution of the loop fetches its code from main memory
- The routine is then copied into the cache
- Subsequent iterations use the cached instructions instead of refetching from main memory
Hit rate formula
- Hit rate = (no. of hits / no. of bus cycles) x 100%
Example 1 - Cache Operation
- A processor has an on-chip cache with a 5 ns cycle time (200 MHz)
- Accesses not found in the cache are located in main memory, which has a 60 ns access time
- The hit rate of the cache (probability of finding the desired content) is 85%
- The miss rate is, therefore, 15%
- Overall memory access time = (0.85 * 5 ns) + (0.15 * 60 ns) = 13.25 ns
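The Example 1 arithmetic can be checked with a short sketch (variable names are my own):

```python
# Average access time weighted by hit and miss rates (Example 1).
cache_time = 5       # ns, on-chip cache cycle time
memory_time = 60     # ns, main memory access time
hit_rate = 0.85

avg = hit_rate * cache_time + (1 - hit_rate) * memory_time
print(round(avg, 2))  # 13.25 (ns)
```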
Example 2 - Cache Performance
- A processor has an on-chip cache with a 5 ns cycle time (200 MHz), with a 90% hit rate
- A second-level [motherboard] cache runs in three cycles with a 70% hit rate of all accesses not found in the on-chip cache
- Accesses not found in either cache are located in main memory, which has a 60 ns access time
- Overall memory access time = (0.90 * 5 ns) + (0.07 * 15 ns) + (0.03 * 60 ns) = 7.35 ns
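The same weighted-average idea extends to the two-level cache of Example 2; note that the second-level hit rate (70%) applies only to the 10% of accesses that miss the on-chip cache (again, variable names are my own):

```python
# Two-level cache average access time (Example 2).
l1_time, l2_time, mem_time = 5, 15, 60   # ns; L2 = 3 cycles x 5 ns
l1_hit = 0.90
l2_hit = (1 - l1_hit) * 0.70             # 0.07 of all accesses
mem_frac = (1 - l1_hit) * 0.30           # 0.03 of all accesses

avg = l1_hit * l1_time + l2_hit * l2_time + mem_frac * mem_time
print(round(avg, 2))  # 7.35 (ns)
```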
Memory Hierarchy
- Levels L0-L5
- L0: Registers are the fastest and costliest; CPU registers hold words retrieved from the L1 cache
- L1: On-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
- L2: Off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
- L3: Main memory (DRAM)
- L4: Local secondary storage (local disks)
- L5: Remote secondary storage (web servers, distributed file systems)
Cache Details
- Cache: A smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device
- Fundamental idea of a memory hierarchy: for each k, the faster, smaller device at level k serves as a cache for the larger device at level k + 1
- Programs tend to access data at level k more often than data at level k + 1, so level k + 1 can be slower and cheaper
- Net effect: a large pool of memory that costs as much as the cheap storage near the bottom, but serves data to programs at the rate of the fast storage near the top
Cache Organization
- There are three types of cache organization: fully associative, direct mapped, and set associative
Fully Associative Cache
- Uses a Tag Cache and Data Cache
- Tag Cache is 128x16
- Data Cache is 128x8
Direct Mapped Cache
- Uses a Tag Cache and Data Cache
- Tag Cache is 2Kx5
- Data Cache is 2Kx8 (2K bytes)
Set Associative
- Uses a Tag and Data for each set, indexed per way
- Tag is 1K x 6
- Data is 1K x 8
Direct-Mapped Example
- Tag cache = (2^18 x 6) / 8 = 192K bytes
- Data cache = (2^18 x 8) / 8 = 256K bytes
2-Way Set Associative Example
- This example uses 2-way associative cache for 16 MB of main memory
- Tag = 2[(2^17 x 7)/8] = 224K bytes
- Data = 2[(2^17 x 8)/8] = 256K bytes
4-Way Set Associative Example
- This example uses 4-way set associative cache for 16 MB of main memory.
- Tag = 4[(2^16 x 8)/8] = 256K bytes
- Data = 4[(2^16 x 8)/8] = 256K bytes
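The tag/data sizing arithmetic in the three examples above follows one pattern, ways x entries x bits-per-entry, converted from bits to kilobytes. A small helper (`kbytes` is my own name) reproduces the figures:

```python
# entries x bits gives total bits; / 8 gives bytes; / 1024 gives KB.
def kbytes(entries, bits, ways=1):
    return ways * entries * bits // 8 // 1024

print(kbytes(2**18, 6))          # 192  direct-mapped tag cache (KB)
print(kbytes(2**18, 8))          # 256  direct-mapped data cache (KB)
print(kbytes(2**17, 7, ways=2))  # 224  2-way tag (KB)
print(kbytes(2**17, 8, ways=2))  # 256  2-way data (KB)
print(kbytes(2**16, 8, ways=4))  # 256  4-way tag (KB)
print(kbytes(2**16, 8, ways=4))  # 256  4-way data (KB)
```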
Write Through
- CPU writes to both Cache and Main memory
Write back
- CPU writes to Cache only, and Cache controller will write to main memory at a convenient time.
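The two write policies can be contrasted in a minimal sketch (the classes and method names are illustrative assumptions, not a real cache controller):

```python
# Write-through: every CPU write updates both cache and main memory.
class WriteThroughCache:
    def __init__(self, memory):
        self.memory, self.cache = memory, {}

    def write(self, addr, data):
        self.cache[addr] = data
        self.memory[addr] = data        # main memory updated immediately

# Write-back: writes go to the cache only; dirty lines are flushed later.
class WriteBackCache:
    def __init__(self, memory):
        self.memory, self.cache, self.dirty = memory, {}, set()

    def write(self, addr, data):
        self.cache[addr] = data
        self.dirty.add(addr)            # defer the main-memory update

    def flush(self):
        """Push dirty lines to main memory 'at a convenient time'."""
        for addr in self.dirty:
            self.memory[addr] = self.cache[addr]
        self.dirty.clear()
```

Write-back saves bus traffic but leaves main memory temporarily out of date, which is exactly why cache coherency (below) matters.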
Cache coherency
- Important in multi-processor and DMA systems to ensure the cache always holds the most recent data, not old data
- When data in main memory changes, the processor's cache must receive a copy of the latest data
- Otherwise the cache holds stale data
Cache replacement policy
- Which block is replaced depends on the "cache replacement policy" adopted, such as LRU, FIFO, etc.
- Under LRU, the controller keeps track of which cache block was least recently used
- The least recently used block is swapped out or flushed to main memory
- Other policies include FIFO and random replacement
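An LRU policy can be sketched with Python's `collections.OrderedDict`, which keeps insertion/access order (the `LRUCache` class is my own illustration, not the controller's actual mechanism):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU replacement: evict the least recently used block when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()      # least recently used first

    def access(self, addr, data=None):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)        # mark most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict / flush LRU block
            self.blocks[addr] = data
        return self.blocks[addr]

cache = LRUCache(2)
cache.access("A", 1)
cache.access("B", 2)
cache.access("A")           # touch A, so B becomes least recently used
cache.access("C", 3)        # cache full: B is evicted
print(list(cache.blocks))   # ['A', 'C']
```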
Cache fill block size
- If the CPU requests data that is not in the cache, the controller must bring it from main memory, so an appropriate fill block size must be chosen, since the fill is an ordinary main-memory access
- Block size typically varies from 4 to 32 bytes, for example 32 bytes in older machines
- On-chip cache is L1
- Cache outside the CPU is L2
82395DX Cache set Entry
- Eight independent line valid bits
- Bit 8 is the tag valid bit
- Bits 9 through 25 are the 17-bit tag
- The 17-bit tag identifies the page number of one of the 131,072 pages in main memory
Description
Compare SRAM (Static RAM) and DRAM (Dynamic RAM) memory cells. SRAM offers faster access but lower density, while DRAM provides higher density at a lower cost but requires refreshing. This contrast makes SRAM suitable for cache memory and DRAM ideal for main memory.