SRAM vs DRAM: Memory Cell Comparison

Questions and Answers

Which type of memory does not require a refresh?

  • DRAM
  • SRAM (correct)
  • ROM
  • EPROM

Which type of memory has a smaller cell size, leading to higher density?

  • DRAM (correct)
  • EEPROM
  • ROM
  • SRAM

Which memory type is generally used for cache memory?

  • ROM
  • Flash Memory
  • SRAM (correct)
  • DRAM

Which memory type has faster access times due to a simpler read process?

  • SRAM (correct)

Which memory type is difficult to integrate with logic circuits due to its special IC process?

  • DRAM (correct)

Which of the following is an advantage of using DRAM?

  • Lower cost per bit (correct)

What is the main reason for using SRAM in cache memory?

  • Fast access (correct)

In a memory hierarchy, what serves as a staging area for a subset of data used by a slower device?

  • Cache (correct)

Which type of memory has the lowest cost per bit?

  • DRAM (correct)

What is the role of cache memory in relation to the CPU and main memory?

  • Placed between the CPU and main memory (correct)

What is the primary purpose of using cache memory?

  • Improve memory access time (correct)

When the CPU finds the requested data in the cache memory, it is called a:

  • Cache Hit (correct)

What happens when a 'cache miss' occurs?

  • The CPU retrieves data from main memory (correct)

Which memory is directly accessed by the CPU for storing and executing instructions?

  • Cache memory (correct)

What is the purpose of keeping a copy of data from main memory in the cache?

  • To allow quicker subsequent access (correct)

If data is not found in the cache and must be fetched from main memory, what is this event called?

  • Cache Miss (correct)

What is the term for ensuring that the data in cache is the most up-to-date?

  • Cache Coherency (correct)

Which of the following is a type of cache organization?

  • Fully Associative (correct)

What is the purpose of a cache directory?

  • To keep track of which memory locations are cached (correct)

Higher levels in the memory hierarchy are generally:

  • Smaller and faster (correct)

Which method updates main memory at a convenient time?

  • Write-back (correct)

What does LRU stand for in the context of cache replacement policies?

  • Least Recently Used (correct)

Which memory technology is characterized by its need for periodic refresh?

  • DRAM (correct)

Which memory technology generally requires more transistors per bit?

  • SRAM (correct)

Which level of the memory hierarchy is the fastest and most expensive per byte?

  • Registers (correct)

Flashcards

SRAM

Memory that uses transistors to store data; faster but less dense and more expensive than DRAM.

DRAM

Memory that uses capacitors to store data; slower but denser and cheaper than SRAM; requires periodic refresh.

SRAM vs DRAM: Speed

SRAM has faster access times than DRAM.

SRAM vs DRAM: Cost

DRAM is less expensive per bit.


SRAM: Typical Use Case

SRAM is commonly used for cache memory.


DRAM: Typical Use Case

DRAM is commonly used for main system memory.


Cache Memory

A small, fast memory that stores frequently accessed data for quicker retrieval.


Hybrid Memory System

Memory design that uses DRAM for main memory and SRAM for cache.


Hit Rate

Measure of how often requested data is found in the cache. Higher values mean better performance.


Cache Miss

A memory access where the requested data is not found in the cache, requiring access to main memory.


Caching a loop

After the first pass through a loop, its instructions are copied into the cache, so subsequent iterations execute from the cache instead of main memory.


Memory Hierarchy

Memory hierarchy (L0, L1, L2...) from registers (fastest, smallest) to remote storage (slowest, largest).


Registers (L0)

The fastest, smallest memory, sits at the top (L0) of the memory hierarchy.


Cache

Small, fast storage that acts as a staging area for a subset of data.


Fully Associative Cache

When a cache is designed such that any memory location can be stored in any cache location. It has the highest flexibility but is also the most complex to implement.


Direct Mapped Cache

Each memory location has one specific location in the cache where it can be stored.


Set Associative Cache

Combines aspects of fully associative and direct-mapped caches. Cache is divided into sets, and each memory location can be stored in any line within a specific set.


Cache Coherency

Ensuring cache data stays consistent with main memory, avoiding stale data in multi-processor systems.


Cache Replacement Policy

Determines which cache block is evicted when a new block must be brought in. LRU and FIFO are common policies.


Cache Fill Block Size

The number of bytes transferred from main memory into the cache when requested data is not in the cache.


Write-Through Cache

Writes go to both the cache and main memory, keeping the two consistent.


Write-Back Cache

Updates are made only in the cache, and main memory is only updated when the cache block is replaced.


Cache Directory

A structure in direct-mapped caches that records which main memory locations are currently held in the cache.


Stale Data

Out-of-date data served because the cache and main memory have not been kept in sync.


Study Notes

  • SRAM and DRAM are two types of memory cells
  • The comparison of the two is shown below

SRAM

  • Has a larger cell size resulting in lower density and higher cost per bit
  • Does not require refreshing
  • Provides faster access due to simpler read operations
  • Benefits from a standard IC process, making it suitable for integration with logic

DRAM Cell

  • Has a smaller cell size, which leads to higher density and lower cost per bit
  • Requires periodic refreshing, including after a read operation
  • Has a complex read process, causing longer access times
  • Involves a special IC process, making it difficult to integrate with logic circuits

SRAM vs DRAM Summary

  • SRAM uses 6 transistors per bit; DRAM uses 1 transistor per bit
  • Relative access time: SRAM 1X, DRAM 10X
  • Relative cost per bit: SRAM 100X, DRAM 1X
  • SRAM is used in cache memories, while DRAM is used in main memories

Cache Memory

  • Is a widely used memory design for high-performance CPUs
  • Utilizes DRAMs for main memory alongside SRAM for cache memory
  • Takes advantage of the speed of SRAM and the density/cost-effectiveness of DRAM
  • Pure SRAM implementation is too expensive
  • Pure DRAM degrades performance
  • Is positioned between the CPU and main memory

Cache Memory Operation

  • When the CPU needs data, it first checks the cache
  • If the data is present in the cache, it's provided to the CPU with zero wait states
  • If the data is not in the cache, the memory controller transfers it from main memory to the CPU, copying it into the cache
  • The cache controller knows what data is stored
  • When the CPU requests data, the address is compared with the addresses in the cache controller
  • If the addresses match, the data is sent to the CPU with zero wait states; this is a "hit"
  • If the addresses do not match, this is a "miss": the controller fetches the data from main memory and sends it to both the CPU and the cache for future use
  • Keeping copies of data allows subsequent requests to result in hits with zero wait states
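The hit/miss flow above can be sketched in Python (illustrative only; a real controller compares addresses in hardware, and the function names here are invented for the example):

```python
# Minimal sketch of cache operation: check the cache first, fall back to
# main memory on a miss, and keep a copy for future requests.

def read(cache, main_memory, address, stats):
    if address in cache:              # address matches: a "hit"
        stats["hits"] += 1
        return cache[address]         # served with zero wait states
    stats["misses"] += 1              # a "miss"
    data = main_memory[address]       # fetch from slower main memory
    cache[address] = data             # copy into cache for future use
    return data

main_memory = {0x100: "A", 0x104: "B"}
cache, stats = {}, {"hits": 0, "misses": 0}
read(cache, main_memory, 0x100, stats)   # miss: copied into the cache
read(cache, main_memory, 0x100, stats)   # hit: served from the cache
```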

Caching Loops

  • A loop segment of code can be fetched, executed, and placed in the cache
  • The loop's first iteration fetches its code from main memory
  • The routine is then copied into the cache
  • Subsequent iterations use the cached instructions instead of refetching from main memory

Hit rate formula

  • Hit rate = (no. of hits / no. of bus cycles) x 100%

Example 1 - Cache Operation

  • A processor has an on-chip cache with a 5 ns cycle time (200 MHz)
  • Accesses not found in the cache are located in main memory, which has a 60 ns access time
  • The hit rate of the cache (probability of finding the desired content) is 85%
  • The miss rate is, therefore, 15%
  • Overall memory access time = (0.85 * 5 ns) + (0.15 * 60 ns) = 13.25 ns

Example 2 - Cache Performance

  • A processor has an on-chip cache with a 5 ns cycle time (200 MHz), with a 90% hit rate
  • A second-level [motherboard] cache runs in three cycles with a 70% hit rate of all accesses not found in the on-chip cache
  • Accesses not found in either cache are located in main memory, which has a 60 ns access time
  • Overall memory access time = (0.90 * 5 ns) + (0.07 * 15 ns) + (0.03 * 60 ns) = 7.35 ns
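Both worked examples follow the same weighted-sum formula, which can be checked in Python (the helper name is our own, not from the lesson):

```python
# Average access time = sum over levels of
# (fraction of all accesses served at that level) * (its access time).

def avg_access_time(levels):
    """levels: list of (fraction of all accesses, access time in ns)."""
    return sum(frac * t for frac, t in levels)

# Example 1: 85% hit in the 5 ns cache, 15% go to 60 ns main memory.
ex1 = avg_access_time([(0.85, 5), (0.15, 60)])            # 13.25 ns

# Example 2: 90% hit L1 (5 ns); of the remaining 10%, 70% hit the
# 3-cycle (15 ns) L2, i.e. 7% of all accesses; 3% reach main memory.
ex2 = avg_access_time([(0.90, 5), (0.07, 15), (0.03, 60)])  # 7.35 ns
```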

Memory Hierarchy

  • The hierarchy runs from L0 (fastest, smallest) to L5 (slowest, largest)
  • Registers (L0) are the fastest and costliest; CPU registers hold words retrieved from the L1 cache
  • On-chip L1 cache (SRAM) holds cache lines retrieved from the L2 cache
  • Off-chip L2 cache (SRAM) holds cache lines retrieved from main memory
  • Main memory (DRAM)
  • Local secondary storage (local disk)
  • Remote secondary storage (web servers, distributed file systems)

Cache Details

  • Cache: A smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device
  • Fundamental idea of a memory hierarchy: for each level k, the faster, smaller device at level k serves as a cache for the larger, slower device at level k + 1
  • Because programs tend to access data at level k more often, the level k + 1 storage can be slower and cheaper
  • Net effect: a large pool of memory that costs as much as the cheap storage near the bottom, but serves data to programs at the rate of the fast storage near the top

Cache Organization

  • There are three types of cache organization: fully associative, direct mapped, and set associative

Fully Associative Cache

  • Uses a Tag Cache and Data Cache
  • Tag Cache is 128x16
  • Data Cache is 128x8

Direct Mapped Cache

  • Uses a Tag Cache and Data Cache
  • Tag Cache is 2Kx5
  • Data Cache is 2Kx8 (2K bytes)

Set Associative

  • Uses a Tag and Data for each set, indexed per way
  • Tag is 1K x 6
  • Data is 1K x 8

Direct-Mapped Example

  • Tag cache = (2^18 x 6) / 8 = 192K bytes
  • Data cache = (2^18 x 8) / 8 = 256K bytes

2-Way Set Associative Example

  • This example uses 2-way associative cache for 16 MB of main memory
  • Tag = 2[(2^17 x 7)/8] = 224K bytes
  • Data = 2[(2^17 x 8)/8] = 256K bytes

4-Way Set Associative Example

  • This example uses 4-way set associative cache for 16 MB of main memory.
  • Tag = 4[(2^16 x 8)/8] = 256K bytes
  • Data = 4[(2^16 x 8)/8] = 256K bytes
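The sizing arithmetic in the three examples above can be verified with a small Python helper (the function name is our own):

```python
# Bytes needed for a tag or data store:
# (ways) * (entries per way) * (bits per entry) / 8 bits per byte.

def cache_bytes(ways, entries, bits_per_entry):
    return ways * entries * bits_per_entry // 8

KB = 1024

# Direct-mapped: 2^18 entries, 6-bit tags, 8-bit data
tag_dm  = cache_bytes(1, 2**18, 6)   # 192K bytes
data_dm = cache_bytes(1, 2**18, 8)   # 256K bytes

# 2-way set associative: 2^17 entries per way, 7-bit tags
tag_2w  = cache_bytes(2, 2**17, 7)   # 224K bytes
data_2w = cache_bytes(2, 2**17, 8)   # 256K bytes

# 4-way set associative: 2^16 entries per way, 8-bit tags
tag_4w  = cache_bytes(4, 2**16, 8)   # 256K bytes
data_4w = cache_bytes(4, 2**16, 8)   # 256K bytes
```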

Write Through

  • CPU writes to both Cache and Main memory

Write back

  • CPU writes to Cache only, and Cache controller will write to main memory at a convenient time.
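The two write policies can be sketched as a toy Python class (the class and method names are invented for illustration; real controllers operate on cache lines in hardware):

```python
# Write-through updates both cache and main memory on every write;
# write-back marks the line dirty and defers the memory update.

class Cache:
    def __init__(self, memory, write_through):
        self.memory = memory
        self.write_through = write_through
        self.lines = {}     # address -> value held in the cache
        self.dirty = set()  # addresses written but not yet in main memory

    def write(self, address, value):
        self.lines[address] = value
        if self.write_through:
            self.memory[address] = value   # both updated immediately
        else:
            self.dirty.add(address)        # main memory updated later

    def flush(self, address):
        """Write-back: update main memory when the block is replaced."""
        if address in self.dirty:
            self.memory[address] = self.lines[address]
            self.dirty.discard(address)

mem_wb = {0x10: 0}
wb = Cache(mem_wb, write_through=False)
wb.write(0x10, 42)   # main memory still holds the old (stale) value
wb.flush(0x10)       # now main memory is up to date

mem_wt = {0x10: 0}
wt = Cache(mem_wt, write_through=True)
wt.write(0x10, 7)    # cache and main memory both updated at once
```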

Cache coherency

  • Important in multi-processor and DMA systems to ensure the cache always holds the most recent data
  • When data in main memory changes, the processor's cache must be updated with a copy of the latest data
  • Otherwise the cache would serve stale data

Cache replacement policy

  • Which block to evict depends on the "cache replacement policy" adopted, such as LRU, FIFO, etc.
  • Under LRU, the controller keeps track of which cache block has been used least recently
  • The least recently used block is swapped out or flushed to main memory
  • Other policies include FIFO and random replacement
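LRU eviction can be sketched with Python's `OrderedDict` (a toy model of the bookkeeping, not controller hardware):

```python
from collections import OrderedDict

# The controller tracks recency of use and evicts the least recently
# used block when the cache is full.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # oldest entry = least recently used

    def access(self, address, data):
        if address in self.blocks:
            self.blocks.move_to_end(address)  # now most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict the LRU block
            self.blocks[address] = data

lru = LRUCache(2)
lru.access("A", 1)
lru.access("B", 2)
lru.access("A", 1)   # "A" is now the most recently used block
lru.access("C", 3)   # cache full: evicts "B", the least recently used
```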

Cache fill block size

  • If the CPU requests data that is not in the cache, the controller must bring it in from main memory; the block size determines how many bytes are fetched per miss
  • Block size typically varies from 4 to 32 bytes; 32-byte blocks were common in older machines
  • On-chip cache is L1
  • Cache outside the CPU is L2
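The block mapping can be illustrated in Python: assuming a 32-byte block (one of the sizes mentioned above), a miss at any address fetches the whole aligned block containing it:

```python
# On a miss the controller fetches an entire aligned block, so nearby
# addresses subsequently hit in the cache.

BLOCK_SIZE = 32  # bytes; an assumed value from the 4-32 byte range above

def block_range(address):
    """Return the first and last address of the block containing address."""
    start = (address // BLOCK_SIZE) * BLOCK_SIZE
    return start, start + BLOCK_SIZE - 1

start, end = block_range(0x1004)
# A miss at 0x1004 brings in 0x1000-0x101F, so a later access to
# 0x1010, for example, is a hit.
```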

82395DX Cache Set Entry

  • Eight independent line valid bits
  • Bit 8 is the tag valid bit
  • Bits 9 through 25 hold the 17-bit tag
  • The 17-bit tag identifies the page number of one of the 131,072 pages in main memory


Description

Compare SRAM (Static RAM) and DRAM (Dynamic RAM) memory cells. SRAM offers faster access but lower density, while DRAM provides higher density at a lower cost but requires refreshing. This contrast makes SRAM suitable for cache memory and DRAM ideal for main memory.
