Micro-Programmed Control Overview
38 Questions

Questions and Answers

What is the primary function of DMA in data transfer?

  • Only suitable for small files.
  • Requires active CPU monitoring.
  • Works in the background without needing CPU intervention. (correct)
  • Directly communicates with I/O devices.

Which type of multiprocessor system allows each processor to have its private memory?

  • Loosely Coupled Multiprocessors. (correct)
  • Tightly Coupled Multiprocessors.
  • Shared Memory Multiprocessors.
  • Single-Core Processors.

What is one of the goals of synchronization in shared memory systems?

  • To manage access to shared resources. (correct)
  • To allow simultaneous access to critical sections.
  • To increase bus contention.
  • To facilitate independent processor execution.

    Which characteristic is essential for ensuring mutual exclusion in critical sections?

    Only one processor can access the critical section at a time.

    What is a semaphore used for in inter-processor communication?

    To indicate whether a processor is in the critical section.

    What issue arises when multiple processors maintain their own caches?

    Data inconsistency due to cache coherence problems.

    How is bus contention resolved in a multiprocessor system?

    Through a bus controller arbitration process.

    What is an advantage of a tightly coupled multiprocessor system?

    It allows for increased reliability due to processor redundancy.

    What defines the termination condition in critical sections?

    Execution of the critical section must finish in finite time.

    What role do buses play in inter-processor communication?

    They connect CPUs and other system components for data transfer.

    What is the function of the Memory Management Unit (MMU) in a computing environment?

    It maps logical addresses to physical addresses.

    What is Memory Interleaving primarily used for?

    To allow parallel access to modules for consecutive addresses.

    Which of the following best describes cache memory?

    It is a small memory for storing frequently accessed data and programs.

    Which RAID level provides redundancy through disk mirroring?

    RAID 1

    How does Direct Memory Access (DMA) benefit system performance?

    It offloads data transfer tasks from the CPU, saving processing time.

    What is the primary characteristic of Random Access Memory (RAM)?

    It allows fixed-time access to any location regardless of address.

    What is the role of sense/write circuits in semiconductor memory chips?

    To read data from selected memory cells and store data in them.

    What is the typical organization of dynamic memory chips?

    A square array of cells with multiplexed row and column addresses.

    Which of the following statements about static and dynamic memories is correct?

    Static memories are more expensive and have lower bit density.

    Which RAID level tolerates two simultaneous disk failures?

    RAID 6

    What happens during a cache miss?

    The CPU accesses main memory to retrieve the required data.

    What determines the stripe size in a RAID configuration?

    The performance needs of the application using the RAID.

    In the context of semiconductor memory, what is meant by the term 'word line'?

    A line driven by the address decoder to connect selected cells.

    What is the primary use of control memory in the context of micro-programmed control?

    Temporary storage for data used by the CPU

    Which registers are included in the control memory architecture?

    Accumulators and Monitor clock status registers

    In microprogramming, what does the Control Data Register (CDR) hold?

    The microinstruction read from memory

    What does the term 'dynamic microprogramming' refer to?

    Microprograms that can be altered as needed

    What function does the next address generator circuit (sequencer) perform?

    It computes the next address while executing micro operations

    In which scenario is a conditional branch necessary in a microprogram?

    When performing operations that are not predetermined

    During addition with micro-instructions, what happens if signs of the two numbers are identical?

    Add their magnitudes and assign the sign of the first number to the result

    What is the purpose of the Hardware Implementation in addition/subtraction algorithms?

    Stores signs in registers and performs operations via flip-flops

    What triggers the operation of the Booth Algorithm in multiplication?

    When the first least significant bit is 1 in a string of 1s in the multiplier

    What is a key characteristic of the Array Multiplier compared to traditional multipliers?

    It calculates the product in a single micro operation

    What happens during the division algorithm when the dividend is less than the divisor?

    The quotient bit remains '0'

    Which of the following best describes the restoring method in division?

    Restores the partial remainder by adding the divisor to the negative result

    What is a factor that contributes to divide overflow during division operations?

    The quotient being higher than the register's capacity

    Which of the following operations is generally simpler in floating-point arithmetic than in fixed-point arithmetic?

    Multiplication and division

    What is the initial step for adding two binary-coded decimal (BCD) numbers?

    Add the two BCD words and check if the sum exceeds 9

    Study Notes

    Micro-Programmed Control

    • Control Memory: a special type of Random Access Memory (RAM) used in mini and mainframe computers to store temporary data.
    • Advantages of control memory: faster data access compared to main memory, which speeds up CPU operations.
    • Addressing in control memory: divided into task mode and executive (interrupt) mode.
    • Registers in control memory: Accumulators, Indexes, Monitor clock status indicating registers, and Interrupt data registers.
    • Microprogramming: control signals are specified by control words (strings of 1s and 0s) stored in control memory, rather than generated by hardwired logic.
    • Microprogram: a sequence of microinstructions stored in control memory.
    • Microinstructions: specify internal control signals for register micro operations.
    • Control Memory Address Register (CAR): specifies the address of the microinstruction.
    • Control Data Register (CDR): holds the microinstruction read from memory.
    • Next address generator circuit: calculates the next address while micro operations are being executed.
    • Advantages of microprogrammed control: allows for flexibility with different control sequences by defining different sets of microinstructions in control memory.
    • Address sequencing: a routine is a collection of microinstructions that implements a machine instruction.
    • Mapping opcodes to microinstruction addresses: simplified by choosing a "required" routine length N and locating the first microinstruction of each routine at a multiple of N.
    • Branch logic: selects which "next address" value is loaded into the CAR (a simple sequencing loop is sketched below).
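
    The fetch-and-sequence behavior described above can be illustrated with a toy Python sketch. The control-memory contents, the routine length N = 4, and the opcode-to-routine mapping below are illustrative assumptions, not details from these notes.

```python
# Minimal sketch of a microprogrammed control loop (illustrative only).
# Each entry holds a control word plus next-address information.
CONTROL_MEMORY = {
    0: ("FETCH: MAR <- PC", 1),
    1: ("FETCH: MDR <- M[MAR], PC <- PC + 1", 2),
    2: ("DECODE: map opcode to routine address", None),  # branch via mapping
    4: ("ADD routine: A <- A + MDR", 0),                 # back to fetch
    8: ("STORE routine: M[MAR] <- A", 0),
}

ROUTINE_LENGTH = 4  # assumed "required" length N; routine i starts at i * N

def map_opcode(opcode: int) -> int:
    """Map an opcode to the first microinstruction of its routine."""
    return opcode * ROUTINE_LENGTH

def run(opcode: int, max_steps: int = 10) -> None:
    car = 0  # Control Address Register
    for _ in range(max_steps):
        control_word, next_addr = CONTROL_MEMORY[car]  # CDR <- microinstruction
        print(f"CAR={car:2d}  {control_word}")
        if next_addr is None:        # branch logic: use the opcode mapping
            car = map_opcode(opcode)
        else:                        # sequencer supplies the next address
            car = next_addr
        if car == 0 and control_word.startswith(("ADD", "STORE")):
            break                    # one machine instruction completed

run(opcode=1)  # executes the ADD routine located at 1 * 4 = 4
```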

    Computer Arithmetic

    • Addition and Subtraction:
      • Eight different conditions can occur based on the signs of numbers being added or subtracted.
      • Hardware Implementation: registers for magnitudes (A and B), flip-flops for signs (As and Bs), and an accumulator for results (A and As).
    • Multiplication:
      • Hardware implementation: registers for the multiplier (Q), multiplicand (B), and partial product (A), and a sequence counter (SC).
      • Booth Algorithm: used for multiplying binary integers in signed 2's complement form (sketched after this list).
      • Array Multiplier: uses a combinational circuit to calculate the product in one micro operation, unlike the sequential process of the multiplication algorithm.
    • Division:
      • Hardware Implementation: registers for the divisor (B), dividend (A and Q), and a sequence counter (SC).
      • Divide Overflow (Dividend Overflow): occurs when the quotient has more bits than the register's capacity, the value of the most significant half of the dividend is greater than or equal to the divisor, or the dividend is divided by 0.
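
    As a concrete illustration of the Booth algorithm mentioned above, the following Python sketch multiplies two signed integers at a fixed register width. The 8-bit width, register names (A, Q, Q_1, M), and helper structure are assumptions for illustration, not part of the original notes.

```python
def booth_multiply(multiplicand: int, multiplier: int, bits: int = 8) -> int:
    """Booth's algorithm on 'bits'-wide signed 2's-complement operands."""
    mask = (1 << bits) - 1
    A = 0                      # partial product (upper half)
    Q = multiplier & mask      # multiplier register
    Q_1 = 0                    # extra bit to the right of Q
    M = multiplicand & mask    # multiplicand register

    for _ in range(bits):      # sequence counter SC
        pair = (Q & 1, Q_1)
        if pair == (1, 0):     # start of a string of 1s: A <- A - M
            A = (A - M) & mask
        elif pair == (0, 1):   # end of a string of 1s: A <- A + M
            A = (A + M) & mask
        # Arithmetic shift right of the combined A, Q, Q_1
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        sign = A >> (bits - 1)
        A = ((A >> 1) | (sign << (bits - 1))) & mask

    product = (A << bits) | Q
    if product & (1 << (2 * bits - 1)):   # reinterpret as signed
        product -= 1 << (2 * bits)
    return product

assert booth_multiply(-3, 7) == -21
assert booth_multiply(6, -5) == -30
```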

    Floating-Point Numbers

    • Add/Subtract Rule: unpack the sign, exponent, and fraction fields; right-shift the significand of the operand with the smaller exponent; add or subtract the significands; round the result; and adjust the exponent (sketched after this list).
    • Multiplication and division: simpler than addition and subtraction as they don't require significand alignment.
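
    The add/subtract rule can be made concrete with a small sketch. The base-10 significands and 4-digit width below are chosen purely for readability and are assumptions; real hardware uses binary significands in IEEE formats.

```python
def fp_add(sig_a: int, exp_a: int, sig_b: int, exp_b: int, digits: int = 4):
    """Add two numbers given as (significand, exponent) = sig * 10**exp."""
    # 1. Align: right-shift the significand of the operand with the smaller exponent.
    if exp_a < exp_b:
        sig_a //= 10 ** (exp_b - exp_a)
        exp = exp_b
    else:
        sig_b //= 10 ** (exp_a - exp_b)
        exp = exp_a
    # 2. Add the aligned significands.
    sig = sig_a + sig_b
    # 3. Round/normalize so the significand fits in 'digits' digits,
    #    adjusting the exponent accordingly.
    while abs(sig) >= 10 ** digits:
        sig = round(sig / 10)
        exp += 1
    return sig, exp

# 1234 * 10**2  +  5678 * 10**0  ->  aligned as 1234*10**2 + 56*10**2
print(fp_add(1234, 2, 5678, 0))   # (1290, 2), i.e. about 1.29e5
```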

    BCD Adder

    • A 4-bit binary adder plus correction logic: it adds two BCD digits, and if the binary sum exceeds 9 it adds 6, producing a valid BCD digit and a carry (see the sketch below).
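
    A digit-level sketch of this correction (the standard add-6 step when the binary sum exceeds 9); the function name and test values are illustrative:

```python
def bcd_digit_add(a: int, b: int, carry_in: int = 0):
    """Add two BCD digits (0-9); returns (carry_out, corrected BCD digit)."""
    s = a + b + carry_in            # plain 4-bit binary addition
    if s > 9:                       # result is not a valid BCD digit
        s += 6                      # correction: skip the six unused codes
    return (s >> 4) & 1, s & 0xF    # carry out, corrected digit

print(bcd_digit_add(7, 5))  # (1, 2): 7 + 5 = 12 -> carry 1, digit 2
print(bcd_digit_add(4, 3))  # (0, 7)
```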

    The Memory System

    • Addressing Scheme: the number of address bits determines the maximum main memory size.
    • Word-Addressable: Each memory word has a distinct address.
    • Byte-Addressable: Each byte has a distinct address.
    • CPU-Main Memory Connection: data transfer happens via MAR (Memory Address Register) and MDR (Memory Data Register).
    • Memory Cycle: in each cycle, 'n' bits of data are transferred between main memory (MM) and the CPU.
    • Processor Bus:
      • Address Bus: k address lines.
      • Data Bus: n data lines.
      • Control Bus: Read, Write, MFC (Memory Function Completed), byte specifiers, etc.

    Memory Operations

    • Read Operation: CPU loads the address into the Memory Address Register (MAR) and sets Read to 1; main memory (MM) loads the data into the Memory Data Register (MDR) and sets MFC (Memory Function Completed) to 1.
    • Write Operation: CPU loads MAR and MDR and sets Write to 1; MM stores the data at the addressed location and sets MFC to 1.
    • Memory Access Time: Time between initiating and completing a memory operation.
    • Memory Cycle Time: Minimum time between two successive memory operations.

    Random Access Memory (RAM)

    • Any location can be accessed for read/write in a fixed time, regardless of the address.

    Cache Memory

    • Small, fast memory between CPU and main memory.
    • Holds active program segments and data.
    • Locality of Reference helps CPU find data in cache most of the time (cache hit).
    • Cache Misses require accessing main memory (hit/miss behavior is sketched after this list).
    • Improves system performance cost-effectively.
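
    A toy direct-mapped cache illustrates the hit/miss behavior described above. The line count, mapping rule, and "main memory" contents are illustrative assumptions.

```python
# Toy direct-mapped cache: address -> line = address % NUM_LINES.
NUM_LINES = 4
cache = [None] * NUM_LINES            # each entry: (tag, data) or None
main_memory = {addr: f"data@{addr}" for addr in range(64)}

def read(addr: int) -> str:
    line = addr % NUM_LINES
    tag = addr // NUM_LINES
    entry = cache[line]
    if entry is not None and entry[0] == tag:
        print(f"addr {addr}: hit")
        return entry[1]
    print(f"addr {addr}: miss -> fetch from main memory")
    data = main_memory[addr]          # cache miss: go to main memory
    cache[line] = (tag, data)         # fill the line for future hits
    return data

read(5)   # miss
read(5)   # hit (locality of reference)
read(9)   # addr 9 maps to the same line, evicting addr 5
read(5)   # miss again
```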

    Memory Interleaving

    • Divides memory into modules, placing consecutive words in different modules.
    • Allows parallel access to modules for consecutive addresses (the address-to-module mapping is sketched after this list).
    • Increases the average fetch rate.
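
    The placement of consecutive words in different modules can be sketched directly; the module count of 4 is an assumption.

```python
NUM_MODULES = 4  # assumed number of memory modules

def interleave(addr: int):
    """Low-order interleaving: consecutive addresses go to consecutive modules."""
    return addr % NUM_MODULES, addr // NUM_MODULES   # (module, word within module)

for addr in range(8):
    module, word = interleave(addr)
    print(f"address {addr} -> module {module}, word {word}")
# Addresses 0..3 land in modules 0..3 and can be fetched in parallel.
```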

    Virtual Memory

    • CPU-generated address is the virtual/logical address.
    • Memory Management Unit (MMU) maps logical addresses to physical addresses (a minimal mapping sketch follows this list).
    • Mapping can change during execution.
    • Logical address space can be larger than physical memory.
    • Active portions of the logical address space are mapped to physical memory, the rest to bulk storage.
    • If data is not in MM, MMU transfers a block from bulk storage to MM, replacing an inactive block.
    • Creates the illusion of a large memory with a smaller, cheaper MM if transfers are infrequent.
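
    A minimal page-table sketch of the MMU's logical-to-physical mapping. The page size, table contents, and the way a page fault is signalled are assumptions chosen for illustration.

```python
PAGE_SIZE = 4096                       # assumed page size
page_table = {0: 5, 1: 9}              # resident logical pages -> physical frames

def translate(logical_addr: int) -> int:
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        # Page fault: a block would be transferred from bulk storage into
        # main memory, replacing an inactive block, before retrying.
        raise LookupError(f"page fault on logical page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))          # page 1, offset 0x234 -> 0x9234 (frame 9)
```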

    Internal Organization of Semiconductor Memory Chips

    • Organized as an array of cells, each storing one bit.
    • A row of cells forms a memory word.
    • Word Line: Connects cells in a row and is driven by the address decoder.
    • Bit Lines: Connect cells in a column to a sense/write circuit.
    • Sense/Write circuits are connected to the data input/output lines.
    • Read Operation: Sense/Write circuits read data from selected cells and transmit it to output lines.
    • Write Operation: Sense/Write circuits receive and store input data in selected cells.

    Memory Cell Types

    • Bipolar: Faster access time, higher power consumption, lower bit density.
    • MOS: Slower access time, lower power consumption, higher bit density.

    Typical Memory Cell

    • Two-transistor inverters forming a flip-flop.
    • Connected to one word line and two bit lines.
    • Bit lines at ~1.6V, word line at ~2.5V, isolating the cell due to reverse-biased diodes.
    • Read Operation: Word line voltage reduced to ~0.3V, forward-biasing a diode allowing current flow from b or b' based on the cell state.
    • Write Operation: With word line at 0.3V, applying positive voltage (~3V) to b' or b forces the cell to 1 or 0 state.

    MOS Memory Cell

    • Commonly used in main memory.
    • Flip-flop structure with transistors T1 and T2.
    • Active pull-up to VCC via T3 and T4.
    • T5 and T6 act as switches controlled by the word line.
    • Read Operation: Selected cell's T5 or T6 is closed, and current flow through b or b' is sensed to set the output bit line.
    • Write Operation: Positive voltage applied to the appropriate bit line of the selected cell to store 0 or 1.

    Static Memories

    • Maintain information as long as power is supplied.

    Dynamic Memories

    • Require power and periodic refresh to maintain data.
    • High bit density and low power consumption.

    Dynamic Memory

    • Information stored as charge on a capacitor.
    • Data read correctly only if read before the charge drops below a threshold.
    • Read Operation: Bit line in high-impedance state, transistor turned on, sense circuit checks charge on the capacitor and refreshes it.

    Typical Organization

    • Square array of cells with row and column addresses from the 16-bit address.
    • Row and column addresses multiplexed on 8 pins.

    Access

    • Row address applied first, loaded into row address latch by RAS (Row Address Strobe), then column address applied and loaded by CAS (Column Address Strobe).
    • Read: Output of the selected circuit transferred to data output DO.
    • Write: Data on DI line overwrites the selected cell.
    • Applying a row address reads and refreshes all cells in that row.
    • Refresh Circuit: Ensures data maintenance by periodically addressing each row.
    • Pseudostatic: Dynamic memory chips with built-in refresh, appearing as static memories.
    • Block Transfers: After loading the row address, successive locations accessed by loading only column addresses, useful for regular access patterns.

    RAID (Redundant Array of Independent Disks)

    • Stores data redundantly on multiple hard disks.
    • Improves performance by allowing overlapped I/O operations.
    • Increases fault tolerance by increasing MTBF (Mean Time Between Failures).
    • Appears as a single logical hard disk to the OS.
    • Uses disk mirroring or disk striping (partitioning storage space into units).

    Stripe Size

    • Small stripes (e.g., 512 bytes) for single-user systems with long records, so a single record spans all disks and can be accessed quickly.
    • Larger stripes for multi-user systems, sized to hold the typical or maximum record, so different records can be served from different disks with overlapped I/O.

    Standard RAID Levels

    • RAID 0: Striping, no redundancy, best performance, no fault tolerance.
    • RAID 1: Disk mirroring, data duplication, improved read performance, write performance same as single disk.
    • RAID 2: Striping with dedicated disks for ECC (Error Checking and Correcting) information, no advantage over RAID 3, obsolete.
    • RAID 3: Striping with one drive for parity information, uses ECC for error detection, data recovery via XOR calculation, best for single-user systems with long records.
    • RAID 4: Large stripes, overlapped I/O for reads, all writes update the parity drive, no advantage over RAID 5.
    • RAID 5: Block-level striping with distributed parity; continues to function with one failed drive; read/write operations span multiple drives, giving good performance; requires at least three disks (five recommended); a poor choice for write-intensive systems because every write also updates parity; rebuild after a failure is slow (parity recovery is sketched after this list).
    • RAID 6: Similar to RAID 5 but with a second parity scheme, tolerates two simultaneous disk failures, higher cost per GB, slower write performance than RAID 5.
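
    The XOR-based recovery used by the parity RAID levels (3/5/6) can be demonstrated directly. The block contents and the three-disk stripe below are illustrative assumptions.

```python
# Parity recovery with XOR, as used in single-parity RAID schemes.
disk0 = bytes([0x12, 0x34, 0x56, 0x78])
disk1 = bytes([0x9A, 0xBC, 0xDE, 0xF0])

# Parity block: byte-wise XOR of the data blocks in the stripe.
parity = bytes(a ^ b for a, b in zip(disk0, disk1))

# Suppose disk1 fails: its contents are recovered by XORing the survivors.
recovered = bytes(a ^ p for a, p in zip(disk0, parity))
assert recovered == disk1
print("recovered disk1:", recovered.hex())
```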

    Direct Memory Access (DMA)

    • Transfers data from RAM to another part of the computer without CPU processing.
    • Saves processing time for data that doesn't require CPU processing or can be processed by other devices.
    • Devices using DMA are assigned to DMA channels.
    • Examples: Sound cards accessing RAM data for processing, DMA-enabled video cards accessing system memory for graphics processing, Ultra DMA hard drives for faster data transfer.
    • Programmed Input/Output (PIO): alternative to DMA, where all data transfer goes through the CPU.
    • Ultra DMA: newer protocol for the ATA/IDE interface, with a burst data transfer rate of up to 33 MB/s.

    DMA Transfer Types

    • Memory to Memory Transfer: moves data from one memory address to another using DMA channels 0 and 1; data passes through a temporary register in the DMA controller.
    • Auto Initialize: after a block transfer, the current address and word-count registers are automatically restored from the base registers, enabling another DMA service without CPU intervention (a register-level sketch follows this list).
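
    A register-level sketch of a memory-to-memory transfer with auto-initialize. The register names mirror the description above, but the controller model itself is an illustrative assumption, not a real device's programming interface.

```python
class DMAController:
    """Toy DMA controller with base/current address and word-count registers."""

    def __init__(self, memory, src, dst, count):
        self.memory = memory
        self.base = (src, dst, count)       # base registers (for auto-initialize)
        self.reload()

    def reload(self):
        # Auto-initialize: restore current registers from the base registers.
        self.src, self.dst, self.count = self.base

    def transfer(self):
        # Move one word per cycle through a temporary register in the
        # controller, without the CPU touching the data.
        while self.count > 0:
            temp = self.memory[self.src]
            self.memory[self.dst] = temp
            self.src += 1
            self.dst += 1
            self.count -= 1
        self.reload()                       # ready for the next block transfer

memory = list(range(16)) + [0] * 16
dma = DMAController(memory, src=0, dst=16, count=16)
dma.transfer()
assert memory[16:] == list(range(16))
```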

    DMA Controller

    • Manages DMA data transfers.
    • Receives location, destination, and data amount from the microprocessor.
    • Transfers data while the microprocessor handles other tasks.
    • Doesn't arbitrate for bus control; the I/O device (DMA slave) does.
    • Takes control of the bus when granted by the central arbitration control point.

    DMA vs. Interrupts vs. Polling

    • DMA: Works in the background without CPU intervention, speeds up data transfer and CPU speed, suitable for large files.
    • Interrupts: Require CPU time, request CPU usage via interrupts, used for immediate tasks.
    • Polling: CPU actively monitors the process, adjustable to device needs, suitable for devices that don't need quick response.

    Multiprocessors

    • Interconnection of two or more CPUs, memory, and I/O equipment.
    • MIMD (Multiple Instruction, Multiple Data) category.
    • Controlled by a single operating system coordinating processor activities via shared memory or inter-processor messages.

    Advantages

    • Increased reliability due to processor redundancy.
    • Increased throughput due to parallel job execution.

    Types

    • Tightly Coupled/Shared Memory Processors: Information shared through common memory, each processor may also have local memory.
    • Loosely Coupled/Distributed Memory Multiprocessors: Each processor has private memory, information shared via interconnection switching or message passing.

    Key Characteristic

    • Ability to share main memory and I/O devices through interconnection structures.

    Inter-Processor Arbitration

    • Buses facilitate information transfer between components.
    • Memory Bus: Connects CPUs and memory.
    • I/O Bus: Connects I/O devices.
    • System Bus: Connects major components (CPUs, I/Os, memory).
    • Processors request component access via the system bus.
    • Bus Contention: Resolved through arbitration by a bus controller.

    Inter-Processor Communication and Synchronization

    • Shared Memory Systems: Messages written to a common memory area.
    • Synchronization is needed to manage shared resources like I/O.

    Critical Sections

    • Resources needing protection from simultaneous access.

    Assumptions

    • Mutual Exclusion: Only one processor can be in a critical section at a time.
    • Termination: Critical section execution completes in a finite time.
    • Fair Scheduling: A process requesting entry to the critical section will eventually enter in a finite time.

    Semaphore

    • A binary value (flag) indicating whether a processor is inside the critical section; it is tested and set before entry to enforce mutual exclusion (a threaded sketch follows).
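
    A threaded sketch of a binary semaphore guarding a critical section. The shared counter, thread count, and iteration count are illustrative assumptions; on a real multiprocessor the test-and-set would be a hardware-supported atomic operation rather than a library call.

```python
import threading

semaphore = threading.Semaphore(1)   # binary semaphore: 1 = critical section free
shared_counter = 0

def worker(increments: int) -> None:
    global shared_counter
    for _ in range(increments):
        semaphore.acquire()           # set the semaphore: enter the critical section
        shared_counter += 1           # only one thread/processor here at a time
        semaphore.release()           # clear the semaphore: leave the critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_counter)                 # 40000: mutual exclusion preserved
```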

    Cache Coherence

    • Each processor in a shared memory multiprocessor has its own private cache.
    • Multiple cached copies of the same data can become inconsistent (the cache coherence problem).
    • An update by one processor must be propagated to, or invalidated in, the other caches to maintain consistency (a write-invalidate sketch follows this list).
    • Ensuring data consistency is crucial for system correctness.
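
    A minimal write-invalidate sketch of the problem described above. The two-cache setup and the invalidation hook are illustrative assumptions, not a full coherence protocol such as MESI.

```python
# Two processors, each with a private cache over one shared memory location.
shared_memory = {"x": 0}
caches = [dict(), dict()]             # caches[p] maps address -> cached value

def read(p: int, addr: str) -> int:
    if addr not in caches[p]:
        caches[p][addr] = shared_memory[addr]   # fill from main memory
    return caches[p][addr]

def write(p: int, addr: str, value: int) -> None:
    shared_memory[addr] = value
    caches[p][addr] = value
    # Write-invalidate: remove stale copies from all other caches.
    for q, cache in enumerate(caches):
        if q != p:
            cache.pop(addr, None)

print(read(0, "x"), read(1, "x"))     # 0 0  (both caches hold x)
write(0, "x", 42)                     # P0 updates x and invalidates P1's copy
print(read(1, "x"))                   # 42 (P1 misses and re-fetches the new value)
```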

    Description

    Explore the fundamentals of micro-programmed control, including the functions of control memory, advantages over main memory, and the role of microinstructions. This quiz covers key concepts such as addressing, registers, and microprogramming techniques essential for understanding computer architecture.
