Lecture 9 - Computer Architecture PDF

Summary

This document presents lecture notes on computer architecture, focusing on memory systems. It covers memory organization, including ROM and RAM design, memory module types such as SDRAM DIMMs, and cache memory, with attention to the memory hierarchy, mapping functions, replacement algorithms, and write policies.

Full Transcript


Computer Science Dep., MET, CS 311 Computer Architecture, 2024/2025, Lecture 9, Assist. Prof. Dr. Elmahdy Maree

CH 5: Memory System

Memory System

The memory system comprises two broad families of semiconductor memory: RAM and ROM.

ROM Design

[Figure: SAP-1 ROM design. Instruction bits I3-I0 drive address inputs A3-A0, selecting one of 16 locations (0-15); an enable input E gates the data outputs D3-D0.]

[Figure: SAP-1 control ROM. A 4x16 decoder driven by A3-A0 selects one of 16 locations; the enable input E gates the control outputs D15-D0.]

Types of ROMs

- Mask-programmed ROM: programmed during manufacturing.
- Programmable Read-Only Memory (PROM): fuses are blown out to produce a '0'.
- Erasable Programmable ROM (EPROM): all data are erased by exposure to ultraviolet light.
- Electrically Erasable PROM (EEPROM): only the required data are erased, using an electrical signal.

RAM Design: Internal Memory

- SDRAM DIMM (Dual In-line Memory Module): SDRAM stands for Synchronous Dynamic Random Access Memory. DIMMs allow two rows of DRAM chips on one module.
- SO-DIMM (Small Outline DIMM): commonly used in notebooks; about half the size of a normal DIMM.

Memory Array (4x4 RAM)

[Figure: a 4x4 RAM built from a 4x4 array of binary cells (BC). Address lines I1 and I0 feed a 2x4 decoder that selects one of four word rows (0-3); a memory enable E and a Read/Write line control the cells, with separate input data and output data lines.]

Types of RAMs

Comparison between SRAM and DRAM

Cache Memory

The Memory Hierarchy

Cache memory is designed to combine the memory access time of expensive, high-speed memory with the large memory size of less expensive, lower-speed memory.

Main memory consists of up to 2^n addressable words, organized as fixed-length blocks of K words each; that is, there are M = 2^n / K blocks in main memory. The cache consists of m blocks, called lines. Each line contains K words plus a tag of a few bits, together with control bits, such as a bit indicating whether the line has been modified since being loaded into the cache. The line size is the length of a line, not including the tag and control bits. For example, with n = 16 and K = 4, main memory holds 2^16 = 65,536 words organized into M = 65,536 / 4 = 16,384 blocks.

Cache Read Operation (RA = read address)

Mapping Function

Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. Further, a means is needed for determining which main memory block currently occupies a cache line. The choice of the mapping function dictates how the cache is organized. Three techniques can be used: direct mapping, associative mapping, and set-associative mapping.

Direct Mapping

Each block of main memory maps into one unique line of the cache: block j maps into line j mod m. The first m blocks of main memory map into lines L0 through Lm-1; the next m blocks map into the cache in the same fashion, so block Bm maps into line L0, block Bm+1 into line L1, and so on. A minimal sketch of this index computation follows below.
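The direct-mapping rule can be illustrated with a short sketch. The following C fragment is not from the lecture; the field widths (2 word bits, 14 line bits) and the names cache_lookup, tags, and valid are example assumptions, chosen only to show how an address splits into word, line, and tag fields and how a hit is detected.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Example field widths, assumed for illustration:
 *   K = 4 words per block  -> 2 word bits
 *   m = 16384 cache lines  -> 14 line bits
 *   the remaining high-order address bits form the tag. */
#define WORD_BITS 2
#define LINE_BITS 14
#define NUM_LINES (1u << LINE_BITS)

static uint32_t tags[NUM_LINES];   /* tag stored with each cache line    */
static bool     valid[NUM_LINES];  /* does the line hold a loaded block? */

/* Direct mapping: block j of main memory may occupy only line j mod m,
 * so the line number is read directly out of the middle address bits. */
static bool cache_lookup(uint32_t addr)
{
    uint32_t line = (addr >> WORD_BITS) & (NUM_LINES - 1u);  /* j mod m   */
    uint32_t tag  =  addr >> (WORD_BITS + LINE_BITS);        /* high bits */
    return valid[line] && tags[line] == tag;                  /* hit test */
}

int main(void)
{
    uint32_t addr = 0x0001ABCDu;
    printf("address 0x%08X -> %s\n", (unsigned)addr,
           cache_lookup(addr) ? "hit" : "miss: load block and set tag");
    return 0;
}
```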
Associative Mapping

Associative mapping overcomes the disadvantage of direct mapping by permitting each main memory block to be loaded into any line of the cache. In this case, the cache control logic interprets a memory address simply as a Tag field and a Word field. The Tag field uniquely identifies a block of main memory. To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's tag for a match.

Set-Associative Mapping

Set-associative mapping is a compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages.

Replacement Algorithms

Once the cache has been filled, bringing a new block into the cache requires replacing one of the existing blocks. For direct mapping there is only one possible line for any particular block, so no choice is possible. For the associative and set-associative techniques, a replacement algorithm is needed:

- Least Recently Used (LRU): replace the block in the set that has been in the cache longest with no reference to it.
- First-In-First-Out (FIFO): replace the block in the set that has been in the cache longest.
- Least Frequently Used (LFU): replace the block in the set that has experienced the fewest references.
- Random: pick a line at random from among the candidate lines; this technique is not based on usage.

To achieve high speed, such an algorithm must be implemented in hardware.

Write Policy

- Write-through: all write operations are made to main memory as well as to the cache, ensuring that main memory is always valid.
- Write-back: updates are made only in the cache. When an update occurs, a dirty bit (or use bit) associated with the line is set. When a block is replaced, it is written back to main memory if and only if the dirty bit is set. A minimal sketch of this policy appears at the end of these notes.

Multi-Level Cache

THANK YOU
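As a rough illustration of the write-back policy described above (not code from the lecture; the names cache_line, cache_write, memory_read_block, and memory_write_block are assumed for the sketch), the fragment below updates only the cache on a write, sets the dirty bit, and writes a replaced block back to main memory only if that bit is set.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_WORDS 4   /* K words per block (example value) */

struct cache_line {
    bool     valid;
    bool     dirty;                  /* set when the line is modified in the cache */
    uint32_t tag;
    uint32_t data[BLOCK_WORDS];
};

/* Stand-ins for one-block transfers to and from main memory (assumed helpers). */
static void memory_write_block(uint32_t tag, const uint32_t *data) { (void)tag; (void)data; }
static void memory_read_block(uint32_t tag, uint32_t *data)
{
    (void)tag;
    memset(data, 0, BLOCK_WORDS * sizeof *data);
}

/* Write one word under the write-back policy: the update goes only to the
 * cache and marks the line dirty; main memory is touched only when a dirty
 * line has to be replaced. */
static void cache_write(struct cache_line *line, uint32_t tag, unsigned word, uint32_t value)
{
    if (!line->valid || line->tag != tag) {               /* miss: line must be replaced */
        if (line->valid && line->dirty)                   /* old block was modified,     */
            memory_write_block(line->tag, line->data);    /* so write it back first      */
        memory_read_block(tag, line->data);               /* fetch the new block         */
        line->tag   = tag;
        line->valid = true;
        line->dirty = false;
    }
    line->data[word] = value;   /* update the cache only              */
    line->dirty      = true;    /* must be written back when replaced */
}

int main(void)
{
    struct cache_line line = {0};
    cache_write(&line, 0x12u, 1u, 0xDEADBEEFu);  /* miss, load block, update, mark dirty */
    return 0;
}
```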
