Unit – III: Single Bus Organization of the Datapath in a Processor


Prof. V. Saritha


Summary

This document provides a detailed overview of the single bus organization of a processor's datapath. It covers fundamental concepts, including program counter (PC), memory address register (MAR), memory data register (MDR), arithmetic logic unit (ALU), and instruction register (IR). The document also touches upon register transfers and memory cells, offering an essential introduction to computer architecture.

Full Transcript


UNIT – III: Single Bus Organization of the Datapath inside a Processor

- PC – Program Counter
- MAR – Memory Address Register
- MDR – Memory Data Register
- ALU – Arithmetic Logic Unit
- IR – Instruction Register
- R0 – R(n-1) – General-Purpose Registers
- Special-Purpose Registers – Stack Pointer and Index Register

Single Bus Organization cont.

- MDR has two inputs and two outputs.
- Data may be loaded into MDR either from the memory bus or from the internal processor bus.
- The data stored in MDR may be placed on either bus.
- The input of MAR is connected to the internal bus, and its output is connected to the external bus.
- The control unit is responsible for issuing the signals that control the operation of all the units inside the processor and for interacting with the memory bus.
- X, Y and Temp are registers used by the processor for temporary storage during the execution of some instructions.

Single Bus Organization cont.

- The MUX selects either the input from register X or the constant 4 as the input of the ALU.
- The constant 4 is used to increment the contents of the program counter.
- The decoder generates the control signals needed to select the registers involved and direct the transfer of data.
- The registers, the ALU and the interconnecting bus are collectively referred to as the datapath.

Register Transfers

- R4 ← R1
- Enable the output of register R1 by setting R1out = 1. This places the contents of R1 on the processor bus.
- Enable the input of register R4 by setting R4in = 1. This loads the data from the processor bus into register R4.

Conceptual Memory Cell – Static RAM Cell

[Figure: a static RAM cell with a Select line, Data in, Data out and R/W control. When the cell is not selected, its tri-state buffer is in the high-impedance state; when selected for reading, the buffer drives the stored value onto Data out.]

Performing an Arithmetic or Logic Operation

- R3 ← R1 + R2
- Step 1: R1out, Xin
- Step 2: R2out, Select X, Add, Yin
- Step 3: Yout, R3in
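The three control steps above can be sketched as a small simulation. This is an illustrative model only (the function and dictionary names are mine, not from the slides); it mimics one register driving the single bus per step while another latches from it.

```python
# Hypothetical sketch of the single-bus control sequence for R3 <- R1 + R2.
# Names (regs, bus, x, y) are illustrative, not from the slides.

def add_r1_r2_to_r3(regs):
    """Execute the sequence: R1out, Xin / R2out, Select X, Add, Yin / Yout, R3in."""
    # Step 1: R1out, Xin -- R1 drives the bus, temporary register X latches it.
    bus = regs["R1"]
    x = bus
    # Step 2: R2out, Select X, Add, Yin -- R2 drives the bus, the MUX selects
    # X (rather than the constant 4), the ALU adds, and Y latches the sum.
    bus = regs["R2"]
    y = x + bus
    # Step 3: Yout, R3in -- Y drives the bus and R3 latches the result.
    bus = y
    regs["R3"] = bus
    return regs

regs = {"R1": 10, "R2": 32, "R3": 0}
add_r1_r2_to_r3(regs)
print(regs["R3"])  # 42
```

Only one register drives the bus in each step, which is why the addition needs three control steps on a single-bus datapath.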
Fetching a Word from Memory

- To fetch a word of information from memory, the processor has to specify the address of the memory location where this information is stored and request a Read operation.
- This applies whether the information to be fetched represents an instruction in a program or an operand specified by an instruction.
- The processor transfers the required address to the MAR, whose output is connected to the address lines of the memory bus.
- At the same time, the processor uses the control lines of the memory bus to indicate that a Read operation is needed.
- When the requested data are received from the memory, they are stored in register MDR, from where they can be transferred to other registers in the processor.

Timing of a Read Operation

- MFC – Memory Function Completed

IAS Computer

- Developed by John von Neumann in the 1940s at Princeton.
- IAS stands for the Institute for Advanced Study.

Organization of the Von Neumann Machine (IAS Computer)

- The task of entering and altering programs for ENIAC was extremely tedious.
- Stored-program concept – the program is stored in the computer along with any relevant data.
- A stored-program computer consists of a processing unit and an attached memory system.

Memory of the IAS

- 1000 storage locations called words.
- Word length: 40 bits.
- A word may contain:
  - A number stored as 40 binary digits (bits) – a sign bit plus a 39-bit value.
  - An instruction pair. Each instruction has:
    - An opcode (8 bits)
    - An address (12 bits) – designating one of the 1000 words in memory.

Von Neumann Machine – Registers

- MBR: Memory Buffer Register – contains the word to be stored in memory or just received from memory.
- MAR: Memory Address Register – specifies the address in memory of the word to be stored or retrieved.
- IR: Instruction Register – contains the 8-bit opcode currently being executed.
- IBR: Instruction Buffer Register – temporary store for the right-hand instruction of a word fetched from memory.
- PC: Program Counter – address of the next instruction pair to fetch from memory.
- AC: Accumulator and MQ: Multiplier-Quotient – hold operands and results of ALU operations.

[Worked example, reconstructed from a garbled slide trace: with M(500) = 3 and M(501) = 4, the program LOAD M(X) 500; ADD M(X) 501; STOR M(X) 500 loads 3 into AC, adds 4 to give AC = 7, and stores 7 back into location 500. Each step moves through MAR ← PC, MBR ← M(MAR), IR/IBR ← MBR, and MAR ← the instruction's address field.]

Addressing Modes – Introduction

- Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs.
- The addressing modes defined in a given instruction set architecture determine how machine language instructions in that architecture identify the operand(s) of each instruction.
- The effective address is the address of the operand on which the operation is to be performed.

Various Addressing Modes

- Implied Addressing Mode
- Immediate Addressing Mode
- Direct Addressing Mode
- Indirect Addressing Mode
- Register Direct Addressing Mode
- Register Indirect Addressing Mode
- Displacement Addressing Mode (combines the direct and register addressing modes)
- Relative Addressing Mode
- Indexed Addressing Mode
- Base Addressing Mode
- Auto-Increment and Auto-Decrement Addressing Mode
Implied (Implicit) Addressing Mode

- No address field is required; the instruction does not explicitly specify an effective address.
- The operand is implied / implicit.
- Examples: complementing the accumulator; setting or clearing flag bits (CLC, STC, etc.).
- Zero-address instructions in a stack-organized computer are implied-mode instructions.
- Effective Address (EA) = AC or Stack[SP]
- Analogy: "Tomorrow, I am on leave" (implies that there is no CAO class); "Come to my cabin" (implies coming to 511A-21 SJT).

Immediate Addressing Mode (instruction format: Opcode | Operand)

- The operand is specified in the instruction itself.
- Useful for initializing registers with a constant value.
- Operand = value in the address field
- Ex: MOV DX, #0034H
- Advantage: no memory reference; fast.
- Disadvantage: limited operand magnitude.
- Analogy: "Come to my cabin: 511A-21 SJT."

Direct Addressing Mode (instruction format: Opcode | Address A)

- The effective address is the address part of the instruction.
- EA = A
- Ex: MOV BX, loc; MOV CX, 4200H
- Advantage: simple memory reference to access data; no additional calculation to work out the effective address.
- Disadvantage: limited address space.
- Analogy: "Anil, please bring my laptop from cabin no. SJT 511-A24."

Indirect Addressing Mode (instruction format: Opcode | Address A)

- The address field of the instruction gives the address of the effective address of the operand stored in memory (a pointer to the operand).
- EA = (A)
- Ex: MOV CX, [4200H]
- Advantage: large address space; may be nested, multilevel or cascaded.
- Disadvantage: multiple memory accesses to find the operand.
Register Direct Addressing Mode (instruction format: Opcode | Register R)

- The operand is in the register specified in the address part of the instruction.
- EA = R
- Ex: MOV AX, BX
- A special case of direct addressing.
- Advantages: no memory reference, shorter instructions, faster instruction fetch, very fast execution.
- Disadvantage: limited address space, as there are a limited number of registers.

Register Indirect Addressing Mode

- The address part of the instruction specifies the register which holds the address of the operand in memory.
- A special case of indirect addressing.
- EA = (R)
- Ex: MOV BX, [DX]
- Advantage: large address space.
- Disadvantage: extra memory reference.

Displacement Addressing Mode

- EA = A + (R)
- The address field holds two values:
  - A = base value
  - R = register that holds the displacement (or vice versa)

Relative Addressing Mode

- A version of displacement addressing in which R is the program counter, PC.
- The content of the PC is added to the address part of the instruction to obtain the effective address of the operand.
- EA = A + (PC)
- Ex: JC next
- Often used in branch (conditional and unconditional) instructions; exploits locality of reference and cache usage.
- Advantage: flexibility.
- Disadvantage: complexity.

Indexed Addressing Mode

- A holds the base address; R holds the displacement, which may be explicit or implicit (segment registers in the 8086).
- The content of the index register is added to the address part of the instruction to obtain the effective address of the operand.
- Used in performing iterative operations.
- EA = A + (SI)
- Ex: MOV CX, [SI] 2400H
- Advantages: flexibility; good for accessing arrays.
- Disadvantage: complexity.

Base Register Addressing Mode

- The content of the base register is added to the address part of the instruction to obtain the effective address of the operand.
- Used to facilitate the relocation of programs in memory.
- EA = A + (BX)
- Ex: MOV 2345H [BX], 0AC24H
- Advantage: flexibility.
- Disadvantage: complexity.

Auto-Increment and Auto-Decrement Addressing Modes

- Used when the address stored in the register refers to a table of data in memory; the register must be incremented or decremented after every access to the table.
- Ex: MOV AX, (BX)+; MOV AX, -(BX)
- Used mostly in the Motorola 680X0 series of computers.

Problem

Find the effective address and the content of AC for the given data.

Solution

Addressing Mode      Effective Address   Content of AC
Direct address       500                 AC ← (500)        800
Immediate operand    201                 AC ← 500          500
Indirect address     800                 AC ← ((500))      300
Relative address     702                 AC ← (PC + 500)   325
Indexed address      600                 AC ← (XR + 500)   900
Register             –                   AC ← R1           400
Register indirect    400                 AC ← (R1)         700
Autoincrement        400                 AC ← (R1)+        700
Autodecrement        399                 AC ← -(R1)        450

Execution of a Complete Instruction (Fetch / Execute Cycle)

- IR ← MBR(20:27)
- MAR ← MBR(28:39)

Multiple Bus Organization

- All general-purpose registers are combined into a single block called the register file.
- The register file is said to have three ports: two for reading and one for writing.
- Buses A and B are used to transfer the source operands to the A and B inputs of the ALU, where an arithmetic or logic operation may be performed.
- The result is transferred to the destination over bus C.
- The Increment Unit is used to increment the PC by 4.
- The constant 4 at the ALU input MUX is used to increment memory addresses in LoadMultiple and StoreMultiple instructions.
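The addressing-mode solution table earlier in this section can be checked mechanically. The sketch below is illustrative: the register values and memory contents are inferred from the table (a two-word instruction at address 200 with address field 500, PC advanced to 202 after the fetch, XR = 100, R1 = 400), and the function names are mine.

```python
# Memory contents and registers inferred from the solution table (assumed,
# not stated explicitly on the slide).
mem = {500: 800, 800: 300, 702: 325, 600: 900, 400: 700, 399: 450, 201: 500}
PC_after_fetch, ADDR, XR, R1 = 202, 500, 100, 400

def effective_address(mode):
    return {
        "direct":            ADDR,                   # EA = A
        "immediate":         201,                    # operand is the word after the opcode
        "indirect":          mem[ADDR],              # EA = (A)
        "relative":          PC_after_fetch + ADDR,  # EA = A + (PC)
        "indexed":           XR + ADDR,              # EA = A + (XR)
        "register_indirect": R1,                     # EA = (R1)
        "autoincrement":     R1,                     # EA = (R1), then R1 is incremented
        "autodecrement":     R1 - 1,                 # R1 is decremented, then EA = (R1)
    }[mode]

def ac_value(mode):
    if mode == "register":
        return R1            # AC <- R1; no memory access at all
    return mem[effective_address(mode)]

print(ac_value("direct"))    # 800
print(ac_value("indirect"))  # 300
```

Each computed value matches the corresponding row of the table, e.g. relative addressing gives EA = 202 + 500 = 702 and AC = 325.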
Example

- Consider the three-operand instruction Add R4, R5, R6.
- The control sequence for executing this instruction:
  1. PC_out, R=B, MAR_in, Read, IncPC
  2. WMFC (Wait for Memory Function Completed) – causes the processor to wait for the arrival of the MFC signal
  3. MDR_outB, R=B, IR_in
  4. R4_outA, R5_outB, SelectA, Add, R6_in, End

The Memory Organization

Basic Concepts

- The maximum size of the memory that can be used in any computer is determined by the addressing scheme.
- For example, a 16-bit computer that generates 16-bit addresses is capable of addressing up to 2^16 = 64K memory locations.
- The number of locations represents the size of the address space of the computer.
- Most modern computers are byte-addressable.
- The big-endian arrangement is used in the 68000 processor.
- The little-endian arrangement is used in Intel processors.
- The ARM architecture can be configured to use either arrangement.

Byte and Word Addressing

Basic Concepts cont.

- The memory is usually designed to store and retrieve data in word-length quantities.
- Consider, for example, a byte-addressable computer whose instructions generate 32-bit addresses.
- When a 32-bit address is sent from the processor to the memory unit, the high-order 30 bits determine which word will be accessed.
- If a byte quantity is specified, the low-order 2 bits of the address specify which byte location is involved.
- In a Read operation, other bytes may be fetched from the memory, but they are ignored by the processor.
- If the byte operation is a Write, however, the control circuitry of the memory must ensure that the contents of the other bytes of the same word are not changed.

Basic Concepts cont.

- Data transfer between the memory and the processor takes place through the use of two processor registers, usually called MAR (memory address register) and MDR (memory data register).
- If MAR is k bits long and MDR is n bits long, then the memory unit may contain up to 2^k addressable locations.
- During a memory cycle, n bits of data are transferred between the memory and the processor.
- This transfer takes place over the processor bus, which has k address lines and n data lines.
- The bus also includes control lines, Read/Write (R/W') and Memory Function Completed (MFC), for coordinating data transfers.
- Other control lines may be added to indicate the number of bytes to be transferred.

Basic Concepts cont.

- The processor reads data from the memory by loading the address of the required memory location into the MAR register and setting the R/W' line to 1.
- The memory responds by placing the data from the addressed location onto the data lines, and confirms this action by asserting the MFC signal.
- Upon receipt of the MFC signal, the processor loads the data on the data lines into the MDR register.
- The processor writes data into a memory location by loading the address of this location into MAR and loading the data into MDR.
- It indicates that a write operation is involved by setting the R/W' line to 0.
- If read or write operations involve consecutive address locations in the main memory, then a "block transfer" operation can be performed.

Basic Concepts cont.

- Memory access time: the time between the Read request and the MFC signal.
- Memory cycle time: the minimum time delay required between the initiation of two successive memory operations.
- Cycle time > access time.
- Memory access time can be reduced using a cache memory, which is a small, fast memory inserted between the larger, slower main memory and the processor.
- Cache memory holds the currently active segments of a program and its data.
- Virtual memory is used to increase the apparent size of the physical memory.
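The MAR/MDR handshake described above can be sketched as a toy simulation. This is a minimal model under assumed names (Memory, cycle, the rw flag are mine); real memories signal MFC asynchronously rather than returning it from a function call.

```python
# Minimal sketch of the processor/memory handshake: load MAR, set R/W',
# wait for MFC, then transfer data through MDR. Names are illustrative.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size

    def cycle(self, address, rw, data_lines):
        """One memory cycle. rw=1 requests a Read, rw=0 a Write.
        Returns (data, MFC); MFC is asserted when the operation completes."""
        if rw == 1:                       # Read: memory drives the data lines
            return self.cells[address], True
        self.cells[address] = data_lines  # Write: memory latches the data lines
        return None, True

mem = Memory(256)

# Write: address in MAR, data in MDR, R/W' = 0.
MAR, MDR = 0x40, 1234
mem.cycle(MAR, 0, MDR)

# Read: address in MAR, R/W' = 1; upon MFC, the data lines load into MDR.
data, mfc = mem.cycle(MAR, 1, None)
if mfc:
    MDR = data
print(MDR)  # 1234
```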
Memory Organization

- Memory cells are organized in the form of an array, in which each cell is capable of storing one bit of information.
- W_i – word line i.
- Example: 16 words of 8 bits each – a 16 x 8 organization.
- CS – chip select, used in multichip memories.

Memory Organization cont.

- During a Read operation, the Sense/Write circuits sense, or read, the information stored in the cells selected by a word line and transmit this information to the output data lines.
- During a Write operation, the Sense/Write circuits receive input information and store it in the cells of the selected word.

Organization of a 1K x 1 Memory Chip

Static RAM

- Holds the data as long as power is supplied.
- Read operations do not destroy the stored data.
- More expensive than DRAM, but with shorter cycle times.
- Used for fast, small memories such as cache memory.
- Uses 4–6 transistors to store a single bit of data.
- Lower power consumption than DRAM.
- Complex construction.
- See the demo at http://tams-www.informatik.uni-hamburg.de/applets/sram/index.html

Static RAM – Read Operation

- To read the state of the SRAM cell, the word line is activated to close switches T1 and T2.
- If the cell is in state 1, the signal on bit line b is high and the signal on bit line b' is low.
- The opposite is true if the cell is in state 0.
- Thus, b and b' are complements of each other.
- Sense/Write circuits at the end of the bit lines monitor the state of b and b' and set the output accordingly.

Static RAM – Write Operation

- The state of the cell is set by placing the appropriate value on bit line b and its complement on b', and then activating the word line.
- This forces the cell into the corresponding state.
- The required signals on the bit lines are generated by the Sense/Write circuit.

A Static RAM Cell

[Figure: an SRAM cell with bit lines b and b', access transistors T1 and T2, internal nodes X and Y, and a word line.]
SRAM

- The transistor arrangement gives stable logic states.
- Logic state 1: C1 high, C2 low; T1 and T4 off, T2 and T3 on.
- Logic state 0: C2 high, C1 low; T2 and T3 off, T1 and T4 on.
- The address line controls two transistors, T5 and T6.
- When a signal is applied to the address line, T5 and T6 turn on.
- Write: apply the value to bit line B and its complement to B'.
- Read: the bit value is read from line B.

Asynchronous Dynamic RAM

- Stores data as charge on capacitors.
- If the capacitor is charged, the data is 1; otherwise the data is 0.
- Needs a refresh cycle, as capacitors have a tendency to discharge.
- The term dynamic refers to this tendency of the stored charge to leak away, even with power continuously applied.
- Volatile.
- When read, the data is lost, so it must be restored after each read.

Asynchronous Dynamic RAM – Cell

- A DRAM cell consists of a transistor and a capacitor.
- The transistor acts as a switch: when closed it allows current to flow; otherwise no current flows.

Asynchronous Dynamic RAM – Operation

- Write:
  - A voltage signal is applied to the bit line (high voltage = 1, low voltage = 0).
  - The address line is activated, allowing the charge to be transferred to the capacitor.
- Read:
  - The address line is activated.
  - The charge on the capacitor is fed out onto a bit line and to a sense amplifier.
  - The sense amplifier compares it with a reference value and determines whether the cell contains a 0 or a 1.
  - The value is then restored.
- Used for large memory requirements.

SRAM vs DRAM

SRAM                             DRAM
Volatile                         Volatile
Faster                           Slower
Smaller memory units             Larger memory units
Complex construction             Simpler to build
No refresh circuitry required    Requires refresh
Used for cache memory            Used for main memory
Digital                          Analog
Expensive                        Less expensive

Internal Organization of a 2M x 8 Dynamic Memory Chip

- RAS – Row Address Strobe
- CAS – Column Address Strobe

Asynchronous DRAM

- In an asynchronous DRAM, the timing of the memory device is controlled asynchronously.
- A specialized memory controller circuit provides the necessary control signals, RAS' and CAS', that govern the timing.
- Such memories are referred to as asynchronous DRAMs.
- Because of their high density and low cost, DRAMs are widely used in the memory units of computers.
- The block-transfer capability of these devices is referred to as fast page mode.

Synchronous DRAM

- DRAMs whose operation is directly synchronized with a clock signal are called synchronous DRAMs (SDRAMs).

Burst Read of Length 4 in SDRAM

- First, the row address is latched under control of the RAS' signal.
- The memory typically takes 2 or 3 clock cycles to activate the selected row.
- Then, the column address is latched under control of the CAS' signal.
- After a delay of one clock cycle, the first set of data bits is placed on the data lines.
- The SDRAM automatically increments the column address to access the next three sets of bits in the selected row, which are placed on the data lines in the next 3 clock cycles.
- SDRAMs have built-in refresh circuitry.
- In a typical SDRAM, each row must be refreshed at least every 64 ms.

Latency

- Memory latency is the amount of time it takes to transfer a word of data to or from the memory.
- In block transfers, the latency is defined as the amount of time it takes to transfer the first word of data.
- This time is usually substantially longer than the time needed to transfer each subsequent word of a block.
- For instance, in the timing diagram, the access cycle begins with the assertion of the RAS signal, and the first word of data is transferred five clock cycles later. Thus, the latency is five clock cycles. If the clock rate is 100 MHz, the latency is 50 ns. The remaining three words are transferred in consecutive clock cycles.
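The latency arithmetic in the example above can be written out directly; the helper below is a sketch with an illustrative name.

```python
# Sketch: latency of a burst access, given the cycle count before the first
# word and the clock rate (5 cycles at 100 MHz in the slide's example).
def latency_ns(cycles, clock_mhz):
    cycle_time_ns = 1000 / clock_mhz  # one cycle at f MHz lasts 1000/f ns
    return cycles * cycle_time_ns

print(latency_ns(5, 100))  # 50.0
```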
Bandwidth

- The number of bits or bytes that can be transferred in one second is referred to as the memory bandwidth.
- It depends on the speed of access and transmission along a single wire, as well as on the number of bits that can be transferred in parallel, i.e. the number of wires.
- Bandwidth = rate at which data are transferred × width of the data bus.

Double Data Rate SDRAM

- Transfers data on both edges of the clock.
- The latency of these devices is the same as for standard SDRAMs.
- Since they transfer data on both edges of the clock, their bandwidth is essentially doubled for long burst transfers.

Structure of Larger Memories – Memory Design

- Available memory chip M(N, W): N × W.
- Required memory size: N1 × W1, where N1 ≥ N and W1 ≥ W.
- Required number of M(N, W) chips: p × q, where p = N1 / N and q = W1 / W.

There are three ways to organize an N1 × W1 memory using N × W chips:

- N1 = N and W1 > W: increase the word size of the chip by a factor of q = W1 / W (horizontal expansion).
  - Example: how many 1024 × 4 RAM chips are needed to provide a memory capacity of 1024 × 8?
- N1 > N and W1 = W: increase the number of words in the memory by a factor of p = N1 / N (vertical expansion).
  - Example: how many 1024 × 8 RAM chips are needed to provide a memory capacity of 2048 × 8?
- N1 > N and W1 > W: increase both the number of words (by p) and the word size (by q) (matrix expansion).
  - Example: how many 1024 × 4 RAM chips are needed to provide a memory capacity of 2048 × 8?
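The chip-count rule p × q can be checked for all three expansion cases with a small sketch (the function name is illustrative):

```python
# Sketch: number of N x W chips needed to build an N1 x W1 memory,
# following p = N1/N (vertical) and q = W1/W (horizontal) from the slides.
def chips_needed(N, W, N1, W1):
    p = N1 // N  # vertical expansion factor (more words)
    q = W1 // W  # horizontal expansion factor (wider words)
    return p * q

print(chips_needed(1024, 4, 1024, 8))  # 2  (horizontal expansion)
print(chips_needed(1024, 8, 2048, 8))  # 2  (vertical expansion)
print(chips_needed(1024, 4, 2048, 8))  # 4  (matrix expansion)
```

These answer the three example questions above: 2, 2 and 4 chips respectively.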
Memory Design – Increasing the Word Size

Problem 1: Design a 128 × 16-bit RAM using 128 × 4-bit RAM chips.

Solution: p = 128 / 128 = 1; q = 16 / 4 = 4. Therefore p × q = 1 × 4 = 4 memory chips of size 128 × 4 are required to construct the 128 × 16-bit RAM.

S.No  Memory  N × W    N1 × W1    p  q  p×q  x  y  z  Total
1     RAM     128 × 4  128 × 16   1  4  4    7  0  0  7

- x – number of address lines per chip
- y (p = 2^y) – lines to select one among the chips of the same type
- z – lines to select the type of memory

Memory Address Map

Component  From  To    Address lines 6–0
RAM 1.1    0000  007F  x x x x x x x
RAM 1.2    0000  007F  x x x x x x x
RAM 1.3    0000  007F  x x x x x x x
RAM 1.4    0000  007F  x x x x x x x

Substitute 0 in place of each x to get the 'From' address and 1 to get the 'To' address.

[Figure: the four 128 × 4 RAM chips share address lines 0–6, the chip select and the Read/Write control; each chip supplies 4 bits of the 16-bit data bus.]

Memory Design – Increasing the Number of Words

Problem 2: Design a 1024 × 8-bit RAM using 256 × 8-bit RAM chips.

Solution: p = 1024 / 256 = 4; q = 8 / 8 = 1. Therefore p × q = 4 × 1 = 4 memory chips of size 256 × 8 are required to construct the 1024 × 8-bit RAM.

S.No  Memory  N × W    N1 × W1    p  q  p×q  x  y  z  Total
1     RAM     256 × 8  1024 × 8   4  1  4    8  2  0  10

Memory Address Map

Component  From  To    Lines 9–8  Lines 7–0
RAM 1      0000  00FF  0 0        x x x x x x x x
RAM 2      0100  01FF  0 1        x x x x x x x x
RAM 3      0200  02FF  1 0        x x x x x x x x
RAM 4      0300  03FF  1 1        x x x x x x x x

Substitute 0 in place of each x to get the 'From' address and 1 to get the 'To' address.
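The 'From'/'To' columns of an address map like the one above can be generated mechanically. The sketch below is illustrative (function name and parameters are mine): it lists one hexadecimal range per chip group, for p groups of N words each.

```python
# Sketch: hexadecimal 'From'/'To' ranges of a memory address map,
# for p chip groups of N words each, starting at a given base address.
def address_map(N, p, base=0):
    ranges = []
    for i in range(p):
        lo = base + i * N
        hi = lo + N - 1
        ranges.append((f"{lo:04X}", f"{hi:04X}"))
    return ranges

# The four 256-word RAM chips of Problem 2:
print(address_map(256, 4))
# [('0000', '00FF'), ('0100', '01FF'), ('0200', '02FF'), ('0300', '03FF')]
```

The same helper reproduces, say, two 128-word ROM chips placed at 0100H: `address_map(128, 2, base=0x100)` gives ('0100', '017F') and ('0180', '01FF').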
[Figure: the four 256 × 8 RAM chips share address lines 7–0 and the 8-bit data bus; a 2 × 4 decoder driven by address lines 9–8 selects one of the four chips.]

Memory Design

Problem 3: Design a 256 × 16-bit RAM using 128 × 8-bit RAM chips.

S.No  Memory  N × W    N1 × W1    p  q  p×q  x  y  z  Total
1     RAM     128 × 8  256 × 16   2  2  4    7  1  0  8

Memory Address Map

Component  From  To    Line 7  Lines 6–0
RAM 1.1    0000  007F  0       x x x x x x x
RAM 1.2    0000  007F  0       x x x x x x x
RAM 2.1    0080  00FF  1       x x x x x x x
RAM 2.2    0080  00FF  1       x x x x x x x

[Figure: address lines 6–0 go to all four 128 × 8 chips; address line 7 drives a 1 × 2 decoder that selects row 1 (RAM 1.1, RAM 1.2) or row 2 (RAM 2.1, RAM 2.2); each row supplies the 16-bit data bus as two 8-bit halves.]

Memory Design

Problem 4: Design a 256 × 16-bit RAM using 256 × 8-bit RAM chips and a 256 × 8-bit ROM using 128 × 8-bit ROM chips.

S.No  Memory  N × W    N1 × W1    p  q  p×q  x  y  z  Total
1     RAM     256 × 8  256 × 16   1  2  2    8  0  1  9
2     ROM     128 × 8  256 × 8    2  1  2    7  1  1  9

Memory Address Map

Component  From  To    Line 8  Line 7  Remaining lines
RAM 1.1    0000  00FF  0       x       x x x x x x x (lines 6–0 also x)
RAM 1.2    0000  00FF  0       x       x x x x x x x (lines 6–0 also x)
ROM 1      0100  017F  1       0       x x x x x x x
ROM 2      0180  01FF  1       1       x x x x x x x

[Figure: address line 8 selects RAM vs ROM through a 1 × 2 decoder; for the ROM, address line 7 drives a second 1 × 2 decoder selecting ROM 1 or ROM 2; the two 256 × 8 RAM chips together supply the 16-bit data bus.]

Exercise

A computer employs RAM chips of 1024 × 8 and ROM chips of 2048 × 4. The computer system needs 2K bytes of RAM and 2K bytes of ROM.

- How many RAM and ROM chips are needed?
- How many lines of the address bus must be used to access system memory? How many of these lines will be common to all chips?
- How many lines must be decoded for chip select?
- Specify the size of the decoder.
- Draw a memory address map for the system and give the address range in hexadecimal for the RAM and the ROM.
- Develop a chip layout for the above specifications.

Memory Controller

- A typical processor issues all bits of an address at the same time.
- The required multiplexing of address bits is usually performed by a memory controller circuit.
- The controller forwards the row and column portions of the address to the memory and generates the RAS' and CAS' signals.
- It also sends the R/W' and CS signals to the memory.
- The data lines are connected directly between the processor and the memory.

RAMBUS Memory

- The key feature of Rambus technology is a fast signaling method used to transfer information between chips.
- Instead of using signals that have voltage levels of either 0 or V_supply to represent the logic values, the signals consist of much smaller voltage swings around a reference voltage, V_ref.
- The reference voltage is about 2 V, and the two logic values are represented by 0.3 V swings above and below V_ref.
- This type of signaling is generally known as differential signaling.
- Small voltage swings make it possible to have short transition times, which allows for a high speed of transmission.

RAMBUS Memory cont.

- The circuitry needed to interface to the Rambus channel is included on the chip; such chips are known as Rambus DRAMs (RDRAMs).
- The original Rambus specification: 9 data lines – 8 for transferring a byte of data and one for parity checking.
- Two-channel Rambus (Direct RDRAM): 18 data lines, transferring two bytes of data at a time.
- No separate address lines.

Rambus Memory

- Communication between the processor (or some other device that can serve as a master) and the RDRAM modules (which serve as slaves) is carried out by means of packets transmitted on the data lines.
- There are three types of packets: request, acknowledge and data.
- Request: issued by the master; indicates the type of operation to be performed, contains the address of the desired memory location, and includes an 8-bit count that specifies the number of bytes involved in the transfer.
- Acknowledge: the addressed slave responds by returning a positive acknowledgement packet if it can immediately satisfy the request; otherwise it indicates that it is busy by returning a negative acknowledgement packet, in which case the master will try again.
- Data.

Read-Only Memories (ROM)

- Non-volatile: retain the stored information when power is turned off.
- Hold the instructions whose execution results in loading the boot program from the disk.
- Used extensively in embedded systems.
- A special writing process is needed, compared with SRAM and DRAM.

ROM

- Read:
  - The word line is activated.
  - The transistor closes, and the voltage on the bit line drops to near 0 if there is a connection between the transistor and ground.
  - If there is no connection to ground, the bit line remains high.
- Write: done when the chip is manufactured.

PROM (Programmable ROM)

- Programmability is achieved by inserting a fuse at point P.
- The memory contains all 0s before programming.
- The user can insert 1s at the required locations by burning out the fuses at those locations using high-current pulses.
- This process is irreversible.
- Provides flexibility and convenience not available with ROM.
- ROM remains suitable where high volumes of data are required.

More ROMs

- EPROM:
  - Contents can be erased and reprogrammed.
  - Erased with ultraviolet light.
  - Must be physically removed from the circuit for reprogramming.
  - The entire contents are erased.
- EEPROM:
  - Erased electrically.
  - Does not have to be removed for erasure.
  - Possible to erase the cell contents selectively.
  - Disadvantage: different voltages are needed for erasing, writing and reading the stored data.
Flash Memory

- Reads individual cells; writes in blocks.
- Prior to writing, the previous contents of the block are erased.
- Lower cost per bit.
- Requires a single power-supply voltage.
- Consumes less power.
- Applications: hand-held computers, cell phones, digital cameras and MP3 music players.
- Forms: flash cards, flash drives.

Memory Hierarchy

Cache Memory – Introduction

- Principle of locality: programs tend to reuse the data and instructions they have used recently (locality of reference).
- An implication of locality is that we can predict, with reasonable accuracy, which instructions and data a program will use in the near future based on its accesses in the recent past.
- The principle of locality also applies to data accesses, though not as strongly as to code accesses.

Types of Locality

- Temporal locality:
  - Recently accessed items are likely to be accessed in the near future.
  - When first accessed, an item is moved into the cache, where it will be found when referenced again.
- Spatial locality:
  - Items whose addresses are near one another tend to be referenced close together in time.
  - When fetching an instruction from memory, its neighbours are moved into the cache as well.

Parameters of Cache Memory

- Cache hit: a referenced item is found in the cache by the processor.
- Cache miss: a referenced item is not present in the cache.
- Hit ratio: the ratio of the number of hits to the total number of references, i.e. hits / (hits + misses).
- Miss penalty: the additional cycles required to service a miss.
  - The time required to service a cache miss depends on both latency and bandwidth.
  - Latency: the time to retrieve the first word of the block.
  - Bandwidth: determines the time to retrieve the rest of the block.
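The hit-ratio formula above is easy to sketch in code. The average-access-time expression below (hit time plus miss rate times miss penalty) is a standard extension of the slide's definitions, not stated on the slide itself; the function names are illustrative.

```python
# Sketch of the cache parameters defined above.
def hit_ratio(hits, misses):
    return hits / (hits + misses)

def avg_access_time(hit_time, miss_penalty, hits, misses):
    # Average time = hit time + miss rate * miss penalty (standard formula,
    # assumed here as an extension of the slide's definitions).
    h = hit_ratio(hits, misses)
    return hit_time + (1 - h) * miss_penalty

print(hit_ratio(95, 5))  # 0.95
```

For example, with a 1-cycle hit time, a 100-cycle miss penalty and a 95% hit ratio, the average access time works out to about 6 cycles.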
Cache Read Operation (flowchart)
 Start: receive the address from the CPU.
 Is the block containing the item in the cache?
 Yes: deliver the item to the CPU. Done.
 No: access main memory for the block containing the item; select the cache line to receive the block; load the main-memory block into the cache and deliver the item to the CPU. Done.

Cache Memory Management Techniques
 Block placement: direct mapping, set associative, fully associative.
 Block identification: tag, index, offset.
 Block replacement: FCFS, LRU, random.
 Update policies: write through, write back, write around, write allocate.

Cache Management
 Mapping
 Determination of where in the cache the blocks (cache lines) of main memory are to be placed.
 Replacement
 Determination of when to replace a block, and which block in the cache is to be replaced with another block of main memory.

Block Placement Example

Direct Mapping
 Cache line = (MM block address) mod (number of lines in the cache)
 Example: with main-memory blocks 0–15 and an 8-line cache, block 12 maps to line (12) mod (8) = 4.

Set-Associative Mapping
 Cache set = (MM block address) mod (number of sets in the cache)
 Example: with main-memory blocks 0–15 and a 4-set cache, block 12 maps to set (12) mod (4) = 0.

Fully Associative Mapping
 A main-memory block may be placed in any cache line (e.g., chosen at random).

Direct Mapping
 If the tags match, there is a hit.
 Otherwise there is a miss – the word is read from memory.
 It is then stored in the cache, together with the new tag, replacing the previous value.
 Disadvantage: the hit ratio drops if two or more words with the same index but different tags are accessed repeatedly.
 When memory is divided into blocks of words, the index field splits into a block field and a word field.
 Example: a 512-word cache organized as 64 blocks of 8 words each has a 6-bit block field and a 3-bit word field.

Direct Mapping
 All words within a block share the same tag.
 When a miss occurs, the entire block is transferred from main memory to the cache.
 This is time consuming, but it improves the hit ratio because of the sequential nature of programs.
 Each main-memory address can be viewed as consisting of three fields:
 The least significant w bits identify a unique word or byte within a block of main memory.
 The remaining s bits specify one of the 2^s blocks of main memory.

Direct Mapping Problem
A digital computer has a memory unit of 64K x 16 and a cache memory of 1K words. The cache uses direct mapping with a block size of four words.
 How many bits are there in the tag, index, block, and word fields of the address format?
 How many blocks can the cache accommodate?

Fully Associative Mapping

Set-Associative Mapping
 Each line of the cache can store two or more words of memory under the same index address.
 The comparison logic performs an associative search of the tags in the set, similar to an associative-memory search – hence the name "set-associative".
 The hit ratio increases as the set size increases, but more complex comparison logic is required as the number of tags compared per access grows.
 When a miss occurs and the set is full, one of the tag-data items is replaced using a block replacement policy.

Problem – 1
 A set-associative cache consists of 64 lines, or slots, divided into four-line sets. Main memory contains 4K blocks of 128 words each. Show the format of main-memory addresses.
 Total number of lines = 64
 Number of lines in each set = 4
 Number of sets = total number of lines / lines per set = 64/4 = 16 = 2^d => d = 4
 Block size = 2^w = 128 => w = 7
 2^(s+w) = 4K x 128 = 2^19 => s + w = 19
 s = 19 – w = 19 – 7 = 12
 Tag = s – d = 12 – 4 = 8
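These field computations all follow one pattern. As a cross-check, here is the earlier direct-mapping problem (64K x 16 memory, 1K-word cache, 4-word blocks) worked mechanically, together with the modulo placement rule from the mapping examples (helper names are my own, for illustration):

```python
import math

def direct_mapped_line(block_addr, num_lines):
    # placement rule: (MM block address) mod (number of lines in the cache)
    return block_addr % num_lines

def direct_mapped_fields(mem_words, cache_words, block_words):
    # split a main-memory address into tag / block / word fields
    addr_bits  = int(math.log2(mem_words))    # 64K words -> 16 address bits
    index_bits = int(math.log2(cache_words))  # 1K-word cache -> 10-bit index
    word_bits  = int(math.log2(block_words))  # 4-word blocks -> 2-bit word field
    block_bits = index_bits - word_bits       # block within the cache: 8 bits
    tag_bits   = addr_bits - index_bits       # tag: 16 - 10 = 6 bits
    num_blocks = cache_words // block_words   # 1K / 4 = 256 blocks
    return tag_bits, block_bits, word_bits, num_blocks

print(direct_mapped_line(12, 8))                      # → 4
print(direct_mapped_fields(64 * 1024, 1024, 4))       # → (6, 8, 2, 256)
```

So the answer to the problem is a 6-bit tag, an 8-bit block field, a 2-bit word field, and 256 blocks in the cache.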
Problem – 2
 A two-way set-associative cache has lines of 16 bytes and a total size of 8K bytes. The 64-Mbyte main memory is byte addressable. Show the format of main-memory addresses.
 Number of bytes in a line = 16 = 2^w => w = 4
 Total cache size = 8K bytes
 Number of lines in the cache = 8K/16 = 512
 Number of lines in a set = 2
 Number of sets = lines in the cache / lines per set = 512/2 = 256 = 2^8 = 2^d => d = 8
 Main-memory address = 2^(s+w) = 64 MB = 2^26 => s + w = 26
 s = 26 – w = 26 – 4 = 22
 Tag = s – d = 22 – 8 = 14

Block Replacement Algorithms
 Least recently used (LRU)
 Replace the block in the set that has been in the cache longest with no reference to it.
 First in first out (FIFO)
 Replace the block in the set that has been in the cache longest.
 Least frequently used (LFU)
 Replace the block in the set that has experienced the fewest references.
 Random
 Randomly replace any one of the candidate blocks.

FIFO
 When a block must be replaced, the oldest block is chosen.

Least-Recently-Used (LRU) Algorithm

Least Frequently Used (LFU)

Memory Interleaving
 If the main memory is structured as a collection of physically separate modules, each with its own address buffer register (ABR) and data buffer register (DBR), memory access operations may proceed in more than one module at the same time.
 Hence, the aggregate rate of transmission of words to and from the memory can be increased.
 Two methods of distributing words among modules:
 Consecutive words in a module
 Consecutive words in consecutive modules

Consecutive Words in a Module
 When consecutive locations are accessed, as happens when a block of data is transferred to a cache, only one module is involved.

Consecutive Words in Consecutive Modules
 This method is called memory interleaving.
 Parallel access is possible.
Hence, it is faster.
 Higher average utilization of the memory system.

Average Access Time
 t_ave = hC + (1 – h)M
 t_ave – the average access time seen by the processor
 h – hit ratio
 M – miss penalty, the time to access information in main memory
 C – the time to access information in the cache

Levels of Caches
 In high-performance processors, two levels of caches are normally used:
 The L1 cache is on the processor chip.
 The L2 cache is much larger, implemented externally using SRAM chips.
 The average access time of the processor with two levels of caches is
 t_ave = h1·C1 + (1 – h1)·h2·C2 + (1 – h1)(1 – h2)·M
 where h1 – hit rate of the L1 cache,
 h2 – hit rate of the L2 cache,
 C1 – the time to access information in the L1 cache,
 C2 – the time to access information in the L2 cache,
 M – the time to access information in main memory.

Update Policies – Write Through
 The cache copy and the main-memory copy are updated simultaneously.
 Advantage: main memory always contains the same data as the cache.
 This is important during DMA transfers, to ensure that the data in main memory are valid.
 Disadvantage: slow, due to the memory access time on every write.

Update Policies
 Write back
 Only the cache is updated during a write operation, and the line is marked by a flag ("dirty" or "modified"). When the word is removed from the cache, it is copied into main memory.
 Memory is not kept up to date; i.e., the same item in the cache and in memory may have different values.
 Write around
 For items not currently in the cache (i.e., write misses), the item is updated in main memory only, without affecting the cache.
 Write allocate
 The item is updated in main memory and the block containing the updated item is brought into the cache.
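The two average-access-time formulas above can be evaluated directly; for example, with h = 0.9, C = 1 cycle and M = 10 cycles the single-level average is 1.9 cycles (a sketch with illustrative numbers):

```python
def t_ave(h, c, m):
    # single-level cache: t_ave = h*C + (1 - h)*M
    return h * c + (1 - h) * m

def t_ave_two_level(h1, h2, c1, c2, m):
    # two levels: h1*C1 + (1 - h1)*h2*C2 + (1 - h1)*(1 - h2)*M
    return h1 * c1 + (1 - h1) * h2 * c2 + (1 - h1) * (1 - h2) * m

print(t_ave(0.9, 1, 10))                    # → 1.9 cycles
print(t_ave_two_level(0.9, 0.8, 1, 10, 100))  # 0.9 + 0.8 + 2.0 = 3.7 cycles
```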
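The FIFO and LRU replacement policies listed earlier can be compared with a small simulation that counts misses only (an illustrative sketch, not a full cache model):

```python
from collections import OrderedDict, deque

def fifo_misses(refs, capacity):
    # evict the block that has been resident longest, regardless of reuse
    cache, order, misses = set(), deque(), 0
    for r in refs:
        if r not in cache:
            misses += 1
            if len(cache) == capacity:
                cache.discard(order.popleft())
            cache.add(r)
            order.append(r)
    return misses

def lru_misses(refs, capacity):
    # evict the block that has gone longest without a reference
    cache, misses = OrderedDict(), 0
    for r in refs:
        if r in cache:
            cache.move_to_end(r)          # hit: mark as most recently used
        else:
            misses += 1
            if len(cache) == capacity:
                cache.popitem(last=False) # evict least recently used
        cache[r] = None
    return cache and misses or misses
```

On the reference string 1, 2, 3, 1, 4, 1 with capacity 3, FIFO evicts block 1 even though it was just reused and misses 5 times, while LRU misses only 4 times.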
Other Enhancements
 Write buffer
 Each write operation results in writing a new value into main memory.
 The processor is slowed down by all these write requests.
 To improve performance, a write buffer can be included for temporary storage of write requests.
 Prefetching
 On a miss, the processor has to pause until the new data arrive – this is the effect of the miss penalty.
 To avoid stalling the processor, it is possible to prefetch data into the cache before they are needed.
 Prefetching can be done using software or hardware.

Other Enhancements cont.
 Prefetching stops other accesses to the cache until the prefetch is completed.
 A cache of this type is said to be locked while it services a miss.
 Lock-up free cache
 The cache can be accessed while a miss is being serviced.

Virtual Memory
 Virtual memory permits the user to construct programs as though a large memory space were available.
 It gives the programmer the illusion of a very large memory.
 It provides a mechanism for translating program-generated addresses into correct main-memory locations by means of a mapping table.
 Virtual address – address used by the programmer.
 Address space – set of virtual addresses.
 Physical address – address in main memory.
 Memory space – set of physical addresses.

Mapping
 Address space → memory space
 Virtual address (logical address) → physical address
 Address generated by programs → actual main-memory address
 The Memory Management Unit (MMU) translates virtual addresses into physical addresses.

Virtual Memory Organization
Virtual Memory Address Translation
 An area in main memory that can hold one page is called a page frame.
 The starting address of the page table is kept in a page table base register.
 Address of the page table entry = page table base register + virtual page number.
 Control bits
 One bit indicates the validity of the page – this allows the page to be invalidated without actually removing it.
 Another bit indicates whether the page has been modified during its residency in memory.
 Other control bits indicate various restrictions that may be imposed on accessing the page, e.g., read and write permissions.

Virtual Memory Address Translation cont.
 The page table information is used by the MMU for every read and write access, so ideally the page table should be situated within the MMU.
 Unfortunately, the page table may be rather large, and since the MMU is normally implemented as part of the processor chip, it is impossible to include a complete page table on the chip.
 Therefore, the page table is kept in main memory.
 However, a copy of a small portion of the page table can be accommodated within the MMU.
 This portion consists of the page table entries that correspond to the most recently accessed pages.
 A small cache, usually called the Translation Lookaside Buffer (TLB), is incorporated into the MMU for this purpose.

Use of an Associative-Mapped TLB
 Given a virtual address, the MMU looks in the TLB for the referenced page.
 If the page table entry for this page is found in the TLB, the physical address is obtained immediately.
 If there is a miss in the TLB, the required entry is obtained from the page table in main memory and the TLB is updated.

Secondary Storage
 Types of external memory
 Magnetic disk
 Floppy disk
 Optical memory
 Magnetic tape
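The TLB lookup described in the address-translation discussion above — check a small associative table first, fall back to the in-memory page table on a miss, then cache the fetched entry — can be sketched as follows (the class, its capacity, and the dictionary page table are all illustrative; a real MMU does this in hardware):

```python
from collections import OrderedDict

class TLB:
    """Tiny fully associative TLB with LRU replacement (illustrative model)."""

    def __init__(self, capacity=4):
        self.entries = OrderedDict()   # virtual page number -> page frame
        self.capacity = capacity

    def translate(self, vpage, page_table):
        """Return (page frame, hit?) for a virtual page number."""
        if vpage in self.entries:              # TLB hit
            self.entries.move_to_end(vpage)
            return self.entries[vpage], True
        frame = page_table[vpage]              # TLB miss: walk the page table
        if len(self.entries) == self.capacity:
            self.entries.popitem(last=False)   # evict least recently used entry
        self.entries[vpage] = frame            # update the TLB
        return frame, False
```

The first reference to a page misses and loads the entry; a repeated reference to the same page then hits in the TLB.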
Magnetic Disk
 A disk is a circular platter constructed of a non-magnetic material called the substrate (traditionally aluminum), coated with a magnetizable material (iron oxide – rust).
 Substrates are now often glass:
 Improved surface uniformity – increases reliability.
 Reduction in surface defects – reduced read/write errors.
 Lower flight heights.
 Better stiffness.
 Better shock/damage resistance.

Magnetic Read and Write Mechanisms
 Data are recorded on, and later retrieved from, the disk via a conducting coil named the head.
 In many systems there are two heads, a read head and a write head.
 During a read or write operation, the head is stationary while the platter rotates beneath it.

Magnetic Disk Write Mechanism
 Electricity flowing through a coil produces a magnetic field.
 Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

Magnetic Disk Read Mechanism
 A magnetic field moving relative to a coil produces an electrical current in the coil.
 When the surface of the disk passes under the head, it generates a current of the same polarity as the one originally recorded.
 The structure of the head for reading is in this case essentially the same as for writing, so the same head can be used for both.
 Such single heads are used in floppy disk systems and in older rigid disk systems.

Data Organization
 Tracks: hard disk platters arrange data into concentric circles; each circle is called a track. There are thousands of tracks per surface, and each track is the same width as the head.
 Sectors: the smallest addressable unit on a track. Sectors are normally 512 bytes in size, and there can be hundreds of sectors per track. They may be of fixed or variable length.
 Heads: the devices used to write and read data on each platter.
 Cylinders: platters on a hard disk are stacked up, and so are the heads. The tracks in the same relative position on each parallel platter form a cylinder.
 Inter-track gap: space between tracks, to reduce errors due to misalignment of the head or interference of magnetic fields.
 Intra-track (inter-sector) gap: gap between sectors, to avoid unreasonable precision requirements on the system.

Disk Data Layout

Physical Characteristics of Disk Systems
 Head motion
 Fixed: one read/write head per track (rare).
 Movable: only one read/write head per surface.
 Disk portability
 Removable: can be removed and replaced with another disk.
 Non-removable: permanently mounted in the disk drive (e.g., a hard disk).
 Sides
 Single sided: magnetizable coating applied on one side.
 Double sided: (usually) both sides coated.
 Platters
 Multiple platters: disk drives accommodate multiple platters stacked vertically a fraction of an inch apart.
 Single platter.
 Head mechanism
 Contact (floppy): the head comes into contact with the surface during operation.
 Fixed gap: the read/write head is positioned a fixed distance above the platter, allowing an air gap.
 Aerodynamic gap.

Multiple Platters – Tracks and Cylinders
 Multiple-platter disks employ a movable head, with one read/write head per platter surface.
 All of the heads are mechanically fixed so that all are at the same distance from the center of the disk and move together.
 Thus, at any time, all of the heads are positioned over tracks that are of equal distance from the center of the disk.
 The set of all the tracks in the same relative position on the platters is referred to as a cylinder.

Disk Performance Parameters
 Seek time: on a movable-head system, the time it takes to position the head at the track.
 Rotational delay (latency): the time it takes for the beginning of the sector to reach the head.
 Access time = seek time + rotational delay
 The time it takes to get into position to read or write.
 Transfer time: the time required to transfer the data, T = b/(r × N), where b is the number of bytes to be transferred, r is the rotation speed (revolutions per second), and N is the number of bytes on a track.

Magnetic Hard Disk
 In most modern disk units, the disks and the read/write heads are placed in a sealed, air-filtered enclosure.
 This approach is known as Winchester technology.
 In such units, the read/write heads can operate closer to the magnetized track surfaces because dust particles, which are a problem in unsealed assemblies, are absent.
 The closer the heads are to a track surface, the more densely the data can be packed along the track, and the closer the tracks can be to each other.
 Thus, Winchester disks have a larger capacity for a given physical size than unsealed units.

Disk Controller
 Operation of a disk drive is controlled by a disk controller circuit, which also provides an interface between the disk drive and the bus that connects it to the rest of the computer system.
 The disk controller may be used to control more than one drive.

Floppy Disk
 Floppy disks are smaller, simpler, and cheaper disk units that consist of a flexible, removable plastic diskette coated with magnetic material.
 The diskette is enclosed in a plastic jacket, which has an opening where the read/write head makes contact with the diskette.
 A hole in the center of the diskette allows a spindle mechanism in the disk drive to position and rotate the diskette.
 The Manchester encoding scheme is used in floppy disks for recording the data.
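The disk-timing quantities defined earlier — seek time, average rotational latency (half a revolution), and the transfer time T = b/(r × N) — combine into the usual access-time estimate. A sketch with assumed drive parameters (a 7200-RPM drive makes 120 revolutions per second, so its average latency is about 4.17 ms):

```python
def disk_access_time(seek_ms, rpm, bytes_to_transfer, bytes_per_track):
    """Estimated time in ms to read a run of bytes: seek + avg latency + transfer."""
    rev_per_sec = rpm / 60.0
    avg_latency_ms = 0.5 * (1000.0 / rev_per_sec)        # half a revolution, on average
    # T = b / (r * N), converted to milliseconds
    transfer_ms = 1000.0 * bytes_to_transfer / (rev_per_sec * bytes_per_track)
    return seek_ms + avg_latency_ms + transfer_ms

# assumed: 4 ms seek, 7200 RPM, one 512-byte sector, 51200 bytes per track
print(disk_access_time(4, 7200, 512, 51200))  # → 8.25 ms
```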
RAID (Redundant Array of Independent Disks)
 Originally: redundant array of inexpensive disks.
 A multiple-disk database design.
 Not a hierarchy.
 Seven levels (six in common use).
 A set of physical disk drives viewed by the OS as a single logical drive.
 Data are distributed across the physical drives of the array.
 Redundant disk capacity is used to store parity information => data recoverability.
 Improves access time and improves reliability.

RAID Level 0
 Not a true member of the RAID family – it does not include redundancy; it uses striping alone to improve performance.
 User and system data are distributed across all disks in the array in strips.
 Imagine a large logical disk containing all the data. It is divided into strips (physical blocks or sectors) that are mapped round-robin to the strips of the array.
 If two different I/O requests are pending for two different blocks of data, there is a good chance that the data reside on different disks and the requests can be serviced in parallel.
 If a single I/O request is for multiple logically contiguous strips, up to n strips can be handled in parallel.

RAID Level 1
 Redundancy is achieved by duplicating all the data.
 Data striping is similar to RAID level 0, but each logical strip is mapped to two physical disks.
 A read request can be serviced from either of the two available disks, whichever involves the minimum seek time plus rotational latency.
 A write request requires both disks to be updated, but this can be done in parallel (the slower write dictates the overall speed).
 Recovery from failure is simple: the data can still be accessed from the second drive.
 Disadvantage: cost – it requires twice the disk space.
 The configuration is therefore limited, typically to system software and other highly critical files.
 Performance improves if the application can split each read request so that both disk members participate.
RAID Level 2
 Utilizes parallel access techniques – all disks participate in the execution of every I/O request.
 Data striping at the bit level.
 An error-correcting code is calculated across corresponding bits on each data disk, and the code bits are stored in the corresponding bit positions on multiple parity disks.

RAID Level 3 – byte-level striping
 Similar to RAID 2 – parallel access, with data distributed in small strips.
 Requires only a single redundant disk, because it uses a single parity bit for the set of individual bits in the same position on all of the data disks.
 If drives X0–X3 contain data and X4 contains the parity bits:
 X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
 Redundancy: in the case of a disk failure, the data can be reconstructed. If drive X1 fails, it can be rebuilt as:
 X1(i) = X4(i) ⊕ X3(i) ⊕ X2(i) ⊕ X0(i)
 Performance: can achieve high transfer rates, but only one I/O request can be executed at a time (better for large data transfers in non-transaction-oriented environments).

RAID Level 4 – block-level striping
 Each disk operates independently – separate I/O requests are satisfied in parallel.
 Suitable for applications with high I/O request rates; not well suited to those requiring high data transfer rates.
 Data striping (strips are larger than in the lower RAID levels).
 A bit-by-bit parity strip is calculated across the corresponding strips on each data disk and stored in the corresponding strip on the parity disk.
 Performance: there is a write penalty when the I/O request is of small size, since a write must update the user data plus the corresponding parity bits.
 X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
 If X1(i) is changed to X1'(i):
 X4'(i) = X3(i) ⊕ X2(i) ⊕ X1'(i) ⊕ X0(i) = X4(i) ⊕ X1(i) ⊕ X1'(i)
 To calculate the new parity, the old user strip and the old parity strip must be read. The controller can then update these two strips with the new data and the newly calculated parity.
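The XOR identities above can be verified directly by treating each strip as an integer bit-vector (a sketch; function names are my own):

```python
def parity(strips):
    """Bitwise XOR parity across data strips (RAID 3/4/5)."""
    p = 0
    for s in strips:
        p ^= s
    return p

def reconstruct(failed_index, strips, p):
    """Rebuild a lost strip by XOR-ing the survivors with the parity strip."""
    r = p
    for i, s in enumerate(strips):
        if i != failed_index:
            r ^= s
    return r

def update_parity(old_parity, old_strip, new_strip):
    # small-write update: P' = P xor D_old xor D_new
    return old_parity ^ old_strip ^ new_strip
```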
 Thus each strip write involves two reads and two writes.

RAID Level 5
 Same as RAID 4, but the parity strips are distributed across all disks.
 A typical allocation uses round-robin: for an n-disk array, the parity strip is on a different disk for each of the first n stripes, and the pattern then repeats.
 This avoids the potential parity-disk bottleneck found in RAID 4.

RAID Level 6
 Two different parity calculations are carried out and stored in separate blocks on different disks.
 Example: XOR plus an independent data-check algorithm => makes it possible to regenerate data even if two disks containing user data fail.
 Number of disks required = N + 2 (where N is the number of disks required for the data).
 Provides high data availability.
 Incurs a substantial write penalty, as each write affects two parity blocks.
 Three disks would have to fail within the MTTR (mean time to repair) interval to cause data to be lost.

Comparison of RAID Levels

Optical Memory
 The compact disk (CD) digital audio system was introduced in 1983.
 The CD is a nonerasable disk that can store more than 60 minutes of audio information on one side.
 Optical disk products:
 Compact disk: CD, CD-ROM, CD-R (recordable), CD-RW (rewritable).
 Digital versatile disk: DVD, DVD-R, DVD-RW.
 Blu-ray DVD – high-definition disk.

CD Operation
 Both the audio CD and the CD-ROM (compact disk read-only memory) share a similar technology.
 Digitally recorded information is imprinted as a series of microscopic pits on the surface of the polycarbonate.
 This is done, first of all, with a finely focused, high-intensity laser, to create a master disk.
 The pitted surface is then coated with a highly reflective layer, usually aluminium or gold.
 This shiny surface is protected against dust and scratches by a top coat of clear acrylic.
 Finally, a label can be silk-screened onto the acrylic.

CD Operation cont.
 Information is retrieved from a CD or CD-ROM by a low-powered laser housed in an optical disk player, or drive unit.
 The laser shines through the clear polycarbonate while a motor spins the disk.
 The intensity of the reflected laser light changes as it encounters a pit.
 Specifically, if the laser beam falls on a pit, which has a somewhat rough surface, the light scatters and a low intensity is reflected back to the source.
 The areas between pits are called lands.
 A land is a smooth surface, which reflects light back at a higher intensity.
 The change between pits and lands is detected by a photosensor and converted into a digital signal.
 The sensor tests the surface at regular intervals.
 The beginning or end of a pit represents a 1; when no change in elevation occurs between intervals, a 0 is recorded.

CD-ROM
 Physical imperfections cannot be avoided, because the pits are very small.
 Hence, it is necessary to use additional bits to provide error checking and correcting capability.
 Such CDs are referred to as CD-ROMs.

CD-Recordables (CD-R)
 Recording can be done by a computer user.
 A spiral track is implemented on the disk during the manufacturing process.
 A laser in a CD-R drive is used to burn pits into an organic dye on the track.
 When a burned spot is heated beyond a critical temperature, it becomes opaque.
 Such burned spots reflect less light when subsequently read.
 The written data are stored permanently.
 Unused portions of a disk can be used to store additional data at a later time.

CD-ReWritables (CD-RW)
 CDs that can be written multiple times are CD-RWs.
 An alloy of silver, indium, antimony, and tellurium is used for the recording layer.
 If this alloy is heated above its melting point (500 °C) and then cooled down, it goes into an amorphous state in which it absorbs light.
 But if it is heated only to about 200 °C, and this temperature is maintained for an extended period, a process known as annealing takes place, which is used for erasing.
 Three different laser powers are used:
 The highest power to record the pits.
 A middle power to erase the contents – the erase power.
 The lowest power to read the stored information.

DVD Technology
 Digital versatile disk.
 Objective: to store a full-length movie on one side of a DVD disk.
 Disk thickness: 1.2 mm; diameter: 120 mm.
 Design:
 A red laser with a wavelength of 635 nm is used, instead of the infrared laser used in CDs, which has a wavelength of 780 nm.
 Pits are smaller, having a minimum length of 0.4 micron.
 Tracks are placed closer together; the distance between tracks is 0.74 micron.

DVD-ROM
 CD-ROM capacity: 682 MB.
 The DVD's greater capacity is due to three differences from CDs:
 Bits are packed more closely on a DVD – capacity increases to 4.7 GB.
 The DVD employs a second layer of pits and lands on top of the first layer – increases to 8.5 GB.
 The DVD-ROM can be two sided, whereas data are recorded on only one side of a CD – increases to 17 GB.

DVD-RAM
 A rewritable version of DVD devices.
 Large storage capacity.
 Disadvantages:
 Higher price.
 Relatively slow writing speed.
 To ensure that the data have been recorded correctly on the disk, a process known as write verification is performed.
 This is done by the DVD-RAM drive itself, which reads the stored contents and checks them against the original data.

Optical Memory Characteristics
Magnetic Tape Systems
 Magnetic tapes are suited to off-line storage of large amounts of data.
 Tape systems use the same reading and recording techniques as disk systems.
 Data on the tape are structured as a number of parallel tracks running lengthwise.
 Earlier tape systems typically used nine tracks.
 This made it possible to store data one byte at a time, with an additional parity bit as the ninth track.
 Recording data in this form is referred to as parallel recording.
 Most modern systems instead use serial recording, in which data are laid out as a sequence of bits along each track, as is done with magnetic disks.
 The typical recording technique used in serial tapes is referred to as serpentine recording.

Organization of Data on Magnetic Tape
 Data are read and written in contiguous blocks, called physical records, on a tape.
 Blocks on the tape are separated by gaps referred to as interrecord gaps.
 Tape motion is stopped only when a record gap is underneath the read/write heads.
 To help users organize large amounts of data, a group of related records is called a file.
 The beginning of a file is identified by a file mark.
 The file mark is preceded by a gap longer than the interrecord gap.
 The first record following a file mark can be used as a header or identifier for the file.
 When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded along its whole length, this time in the opposite direction.

Magnetic Tape cont.
 A tape drive is a sequential-access device.
 If the tape head is positioned at record 1, then to read record N it is necessary to read physical records 1 through N – 1, one at a time.
 Magnetic tape was the first kind of secondary memory.
 The controller of a magnetic tape drive enables the execution of a number of control commands in addition to read and write commands.
 Control commands include: rewind tape; rewind and unload tape; erase tape; write tape mark; forward space one record; backspace one record; forward space one file; backspace one file.
