Questions and Answers
What is the primary function of DMA in data transfer?
Which type of multiprocessor system allows each processor to have its private memory?
What is one of the goals of synchronization in shared memory systems?
Which characteristic is essential for ensuring mutual exclusion in critical sections?
What is a semaphore used for in inter-processor communication?
What issue arises when multiple processors maintain their own caches?
How is bus contention resolved in a multiprocessor system?
What is an advantage of a tightly coupled multiprocessor system?
What defines the termination condition in critical sections?
What role do buses play in inter-processor communication?
What is the function of the Memory Management Unit (MMU) in a computing environment?
What is Memory Interleaving primarily used for?
Which of the following best describes cache memory?
Which RAID level provides redundancy through disk mirroring?
How does Direct Memory Access (DMA) benefit system performance?
What is the primary characteristic of Random Access Memory (RAM)?
What is the role of sense/write circuits in semiconductor memory chips?
What is the typical organization of dynamic memory chips?
Which of the following statements about static and dynamic memories is correct?
Which RAID level tolerates two simultaneous disk failures?
What happens during a cache miss?
What determines the stripe size in a RAID configuration?
In the context of semiconductor memory, what is meant by the term 'word line'?
What is the primary use of control memory in the context of micro-programmed control?
Which registers are included in the control memory architecture?
In microprogramming, what does the Control Data Register (CDR) hold?
What does the term 'dynamic microprogramming' refer to?
What function does the next address generator circuit (sequencer) perform?
In which scenario is a conditional branch necessary in a microprogram?
During addition with micro-instructions, what happens if signs of the two numbers are identical?
What is the purpose of the Hardware Implementation in addition/subtraction algorithms?
What triggers the operation of the Booth Algorithm in multiplication?
What is a key characteristic of the Array Multiplier compared to traditional multipliers?
What happens during the division algorithm when the dividend is less than the divisor?
Which of the following best describes the restoring method in division?
What is a factor that contributes to divide overflow during division operations?
Which of the following operations is generally simpler in floating-point arithmetic than in fixed-point arithmetic?
What is the initial step for adding two binary-coded decimal (BCD) numbers?
Study Notes
Micro-Programmed Control
- Control Memory: a special type of fast Random Access Memory (RAM) used in mini and mainframe computers to hold control information and temporary data.
- Advantages of control memory: faster data access compared to main memory, which speeds up CPU operations.
- Addressing in control memory: divided into task mode and executive (interrupt) mode.
- Registers in control memory: Accumulators, Indexes, Monitor clock status indicating registers, and Interrupt data registers.
- Microprogramming: control signals are specified by control words (strings of 1s and 0s) stored in control memory, rather than being generated by fixed hardwired logic.
- Microprogram: a sequence of microinstructions stored in control memory.
- Microinstructions: specify internal control signals for register micro operations.
- Control Memory Address Register (CAR): specifies the address of the microinstruction.
- Control Data Register (CDR): holds the microinstruction read from memory.
- Next address generator (sequencer): computes the address of the next microinstruction while the current micro operations are being executed.
- Advantages of microprogrammed control: allows for flexibility with different control sequences by defining different sets of microinstructions in control memory.
- Address sequencing: a routine is a collection of microinstructions that implements a machine instruction.
- Mapping opcodes to microinstruction addresses: simplified by fixing a "required" routine length N and placing the first microinstruction of each routine at an address that is a multiple of N; see the sketch after this list.
- Branch logic: determines which "next address" value is passed to the CAR.
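To make the mapping above concrete, here is a minimal Python sketch. The routine length N = 4, the opcode values, and the helper name routine_start_address are illustrative assumptions, not values taken from these notes.

```python
# Minimal sketch: each machine instruction's microinstruction routine is
# allotted N slots, so the routine for a given opcode starts at opcode * N.
N = 4  # assumed "required" routine length in microinstructions

def routine_start_address(opcode: int) -> int:
    """Routines are located at multiples of N in control memory."""
    return opcode * N

# Opcodes 0, 1 and 2 map to control-memory addresses 0, 4 and 8.
for opcode in range(3):
    print(opcode, "->", routine_start_address(opcode))
```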
Computer Arithmetic
- Addition and Subtraction:
- Eight different conditions can occur based on the signs of numbers being added or subtracted.
- Hardware Implementation: registers for magnitudes (A and B), flip-flops for signs (As and Bs), and an accumulator for results (A and As).
- Multiplication:
- Hardware implementation: registers for the multiplier (Q), multiplicand (B), and partial product (A), and a sequence counter (SC).
- Booth Algorithm: used for multiplying binary integers in signed 2's complement form; see the sketch after this list.
- Array Multiplier: uses a combinational circuit to calculate the product in one micro operation, unlike the sequential process of the multiplication algorithm.
- Division:
- Hardware Implementation: registers for the divisor (B), dividend (A and Q), and a sequence counter (SC).
- Divide Overflow (Dividend Overflow): occurs when the quotient needs more bits than the quotient register can hold, i.e., when the most significant half of the dividend is greater than or equal to the divisor, or when the divisor is 0.
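To make the Booth Algorithm bullet above concrete, here is a minimal Python sketch of register-level Booth multiplication. The 8-bit word size, the register names (A, Q, q_1, M), and the function name booth_multiply are illustrative assumptions rather than anything specified in these notes.

```python
def booth_multiply(multiplicand: int, multiplier: int, n: int = 8) -> int:
    """Booth's algorithm for n-bit signed 2's-complement multiplication.

    A is the partial-product register, Q holds the multiplier, q_1 is the
    extra bit examined together with Q's least significant bit.
    """
    mask = (1 << n) - 1
    A, M, Q, q_1 = 0, multiplicand & mask, multiplier & mask, 0

    for _ in range(n):                          # sequence counter SC = n
        pair = ((Q & 1) << 1) | q_1
        if pair == 0b01:                        # 01: add the multiplicand
            A = (A + M) & mask
        elif pair == 0b10:                      # 10: subtract the multiplicand
            A = (A - M) & mask
        # Arithmetic shift right of A, Q and q_1 as one combined register.
        q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
        A = ((A >> 1) | ((A >> (n - 1)) << (n - 1))) & mask

    product = (A << n) | Q                      # 2n-bit result in A:Q
    if product >> (2 * n - 1):                  # reinterpret as signed
        product -= 1 << (2 * n)
    return product

assert booth_multiply(-7, 3) == -21
assert booth_multiply(6, -5) == -30
```

The combined A:Q pair holds the 2n-bit product, mirroring the partial-product and multiplier registers in the hardware description above.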
Floating-Point Numbers
- Add/Subtract Rule: unpack the sign, exponent, and fraction fields; right-shift the significand of the operand with the smaller exponent to align the two; add or subtract the significands; round the result; and adjust the exponent (see the sketch after this list).
- Multiplication and division: simpler than addition and subtraction as they don't require significand alignment.
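As a worked illustration of the add rule above, here is a toy Python sketch that uses a decimal (significand, exponent) pair with value = significand * 10**exponent. This is not IEEE 754: the 6-digit precision, the truncation used in place of proper rounding, and the name fp_add are all simplifying assumptions.

```python
def fp_add(x, y, digits=6):
    """Toy floating-point addition on (significand, exponent) pairs."""
    (sx, ex), (sy, ey) = x, y
    # Step 1: align by right-shifting (dividing) the significand of the
    # operand with the smaller exponent; real hardware keeps guard bits.
    if ex < ey:
        sx, ex = sx // 10 ** (ey - ex), ey
    elif ey < ex:
        sy, ey = sy // 10 ** (ex - ey), ex
    # Step 2: add the (signed) significands.
    s, e = sx + sy, ex
    # Step 3: renormalize so the significand fits in `digits` digits.
    while abs(s) >= 10 ** digits:
        s, e = s // 10, e + 1
    return s, e

# 123 * 10**0  +  456 * 10**-2 (= 4.56): alignment truncates 4.56 to 4,
# so the result is (127, 0); the lost .56 is what guard bits would preserve.
print(fp_add((123, 0), (456, -2)))      # -> (127, 0)
```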
BCD Adder
- A circuit built around a 4-bit binary adder that adds two BCD digits, adds 6 to the binary sum whenever it exceeds 9, and produces a 4-bit BCD digit plus a carry.
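A minimal Python sketch of one BCD adder stage, assuming the standard add-6 correction whenever the 4-bit binary sum exceeds 9; the function name bcd_digit_add is illustrative.

```python
def bcd_digit_add(a: int, b: int, carry_in: int = 0):
    """Add two BCD digits (0-9) the way a single BCD adder stage does."""
    assert 0 <= a <= 9 and 0 <= b <= 9
    s = a + b + carry_in      # raw output of the 4-bit binary adder
    if s > 9:                 # result is not a valid BCD digit
        s += 6                # correction: add 0110
    return s & 0xF, s >> 4    # BCD digit and carry out

print(bcd_digit_add(7, 5))    # -> (2, 1), i.e. 7 + 5 = 12 in BCD
```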
The Memory System
- Addressing Scheme: determines the maximum main memory size.
- Word-Addressable: Each memory word has a distinct address.
- Byte-Addressable: Each byte has a distinct address.
- CPU-Main Memory Connection: data transfer happens via MAR (Memory Address Register) and MDR (Memory Data Register).
- Memory Cycle: 'n' bits (one word) are transferred between main memory (MM) and the CPU per cycle.
- Processor Bus:
- Address Bus: k address lines.
- Data Bus: n data lines.
- Control Bus: Read, Write, MFC (Memory Function Completed), byte specifiers, etc.
Memory Operations
- Read Operation: the CPU loads the address into the Memory Address Register (MAR) and sets Read to 1; main memory (MM) loads the data into the Memory Data Register (MDR) and sets MFC (Memory Function Completed) to 1.
- Write Operation: the CPU loads the MAR and MDR and sets Write to 1; MM stores the data in the addressed location and sets MFC to 1.
- Memory Access Time: Time between initiating and completing a memory operation.
- Memory Cycle Time: Minimum time between two successive memory operations.
Random Access Memory (RAM)
- Any location can be accessed for read/write in a fixed time, regardless of the address.
Cache Memory
- Small, fast memory between CPU and main memory.
- Holds active program segments and data.
- Locality of reference means the CPU finds the needed data in the cache most of the time (a cache hit).
- Cache misses require accessing main memory; see the sketch after this list.
- Improves system performance cost-effectively.
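The toy direct-mapped cache below illustrates hits and misses under locality of reference. The line count, block size, and direct mapping are illustrative assumptions; these notes do not specify a particular mapping or replacement scheme.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: a miss fetches the block from 'main memory'
    (here a dict) and replaces whatever occupied the selected line."""

    def __init__(self, num_lines=8, block_size=4):
        self.num_lines, self.block_size = num_lines, block_size
        self.lines = {}                  # line index -> (tag, block data)
        self.hits = self.misses = 0

    def read(self, address, memory):
        block_number = address // self.block_size
        index = block_number % self.num_lines
        tag = block_number // self.num_lines
        line = self.lines.get(index)
        if line and line[0] == tag:      # cache hit
            self.hits += 1
        else:                            # cache miss: access main memory
            self.misses += 1
            base = block_number * self.block_size
            block = [memory.get(a, 0) for a in range(base, base + self.block_size)]
            self.lines[index] = (tag, block)
        return self.lines[index][1][address % self.block_size]

memory = {a: a * 10 for a in range(64)}
cache = DirectMappedCache()
for a in [0, 1, 2, 3, 0, 1]:             # locality: later reads hit
    cache.read(a, memory)
print(cache.hits, cache.misses)          # -> 5 1
```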
Memory Interleaving
- Divides memory into modules, placing consecutive words in different modules.
- Allows parallel access to modules for consecutive addresses.
- Increases the average fetch rate.
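A minimal sketch of low-order interleaving, assuming M = 4 modules; consecutive addresses fall in different modules, which is what permits the parallel access described above.

```python
M = 4  # assumed number of memory modules

def interleaved_location(address: int):
    module = address % M       # which module holds the word
    offset = address // M      # word position inside that module
    return module, offset

# Addresses 0, 1, 2, 3 land in modules 0, 1, 2, 3 and can be fetched in parallel.
for address in range(8):
    print(address, "->", interleaved_location(address))
```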
Virtual Memory
- CPU-generated address is the virtual/logical address.
- Memory Management Unit (MMU) maps logical addresses to physical addresses.
- Mapping can change during execution.
- Logical address space can be larger than physical memory.
- Active portions of the logical address space are mapped to physical memory, the rest to bulk storage.
- If data is not in MM, MMU transfers a block from bulk storage to MM, replacing an inactive block.
- Creates the illusion of a large memory with a smaller, cheaper MM if transfers are infrequent.
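A minimal sketch of the MMU's logical-to-physical mapping, assuming 4 KiB pages and a simple page-number-to-frame-number table; the table contents and the page-fault handling here are illustrative only.

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 3: 7}          # hypothetical resident pages

def translate(logical_address: int) -> int:
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page not in page_table:           # inactive portion: not in MM
        raise LookupError("page fault: bring the block in from bulk storage")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1040)))            # page 1, offset 0x40 -> frame 2 -> 0x2040
```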
Internal Organization of Semiconductor Memory Chips
- Organized as an array of cells, each storing one bit.
- A row of cells forms a memory word.
- Word Line: Connects cells in a row and is driven by the address decoder.
- Bit Lines: Connect cells in a column to a sense/write circuit.
- Sense/Write circuits are connected to the data input/output lines.
- Read Operation: Sense/Write circuits read data from selected cells and transmit it to output lines.
- Write Operation: Sense/Write circuits receive and store input data in selected cells.
Memory Cell Types
- Bipolar: Faster access time, higher power consumption, lower bit density.
- MOS: Slower access time, lower power consumption, higher bit density.
Typical Memory Cell
- Two-transistor inverters forming a flip-flop.
- Connected to one word line and two bit lines.
- Bit lines at ~1.6V, word line at ~2.5V, isolating the cell due to reverse-biased diodes.
- Read Operation: Word line voltage reduced to ~0.3V, forward-biasing a diode allowing current flow from b or b' based on the cell state.
- Write Operation: With word line at 0.3V, applying positive voltage (~3V) to b' or b forces the cell to 1 or 0 state.
MOS Memory Cell
- Commonly used in main memory.
- Flip-flop structure with transistors T1 and T2.
- Active pull-up to VCC via T3 and T4.
- T5 and T6 act as switches controlled by the word line.
- Read Operation: Selected cell's T5 or T6 is closed, and current flow through b or b' is sensed to set the output bit line.
- Write Operation: Positive voltage applied to the appropriate bit line of the selected cell to store 0 or 1.
Static Memories
- Maintain information as long as power is supplied.
Dynamic Memories
- Require power and periodic refresh to maintain data.
- High bit density and low power consumption.
Dynamic Memory
- Information stored as charge on a capacitor.
- Data read correctly only if read before the charge drops below a threshold.
- Read Operation: Bit line in high-impedance state, transistor turned on, sense circuit checks charge on the capacitor and refreshes it.
Typical Organization
- Square array of cells with row and column addresses from the 16-bit address.
- Row and column addresses multiplexed on 8 pins.
Access
- Row address applied first, loaded into row address latch by RAS (Row Address Strobe), then column address applied and loaded by CAS (Column Address Strobe).
- Read: Output of the selected circuit transferred to data output DO.
- Write: Data on DI line overwrites the selected cell.
- Applying a row address reads and refreshes all cells in that row.
- Refresh Circuit: Ensures data maintenance by periodically addressing each row.
- Pseudostatic: Dynamic memory chips with built-in refresh, appearing as static memories.
- Block Transfers: After loading the row address, successive locations accessed by loading only column addresses, useful for regular access patterns.
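A minimal sketch of the row/column multiplexing described above, assuming a 64K x 1 chip whose 16-bit address is split into 8-bit row and column halves presented on the same pins under RAS and then CAS.

```python
def split_address(address: int):
    row = (address >> 8) & 0xFF     # applied first, latched by RAS
    col = address & 0xFF            # applied second, latched by CAS
    return row, col

print(split_address(0xABCD))        # -> (171, 205), i.e. (0xAB, 0xCD)
```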
RAID (Redundant Array of Independent Disks)
- Stores data redundantly on multiple hard disks.
- Improves performance by allowing overlapped I/O operations.
- Increases fault tolerance by increasing MTBF (Mean Time Between Failures).
- Appears as a single logical hard disk to the OS.
- Uses disk mirroring or disk striping (partitioning storage space into units).
Stripe Size
- Small stripes (e.g., 512 bytes): suit single-user systems with large records, since each record is spread across all disks and can be read in parallel for fast access.
- Large stripes: suit multi-user systems; a stripe large enough to hold the typical or maximum record size lets different requests proceed as overlapped I/O on different disks.
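A minimal sketch of how a stripe size and disk count determine where a byte lands in a striped array; the 512-byte stripe unit and the four disks are assumptions chosen for illustration.

```python
STRIPE_SIZE = 512     # bytes per stripe unit (a "small stripe" per the note above)
NUM_DISKS = 4

def locate(byte_offset: int):
    unit = byte_offset // STRIPE_SIZE        # which stripe unit holds the byte
    disk = unit % NUM_DISKS                  # units rotate across the disks
    stripe = unit // NUM_DISKS               # row of units across the array
    return disk, stripe, byte_offset % STRIPE_SIZE

print(locate(1300))   # -> (2, 0, 276): third disk, first stripe row
```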
Standard RAID Levels
- RAID 0: Striping, no redundancy, best performance, no fault tolerance.
- RAID 1: Disk mirroring, data duplication, improved read performance, write performance same as single disk.
- RAID 2: Striping with dedicated disks for ECC (Error Checking and Correcting) information, no advantage over RAID 3, obsolete.
- RAID 3: Striping with one drive dedicated to parity information; uses embedded ECC information for error detection and recovers lost data with an XOR calculation (sketched after this list); best for single-user systems with long records.
- RAID 4: Large stripes, overlapped I/O for reads, all writes update the parity drive, no advantage over RAID 5.
- RAID 5: Block-level striping with distributed parity, functions even with one drive failure, allows read/write operations to span multiple drives, good performance, requires at least three disks (five recommended), poor choice for write-intensive systems due to parity write overhead, slow rebuild time after failure.
- RAID 6: Similar to RAID 5 but with a second parity scheme, tolerates two simultaneous disk failures, higher cost per GB, slower write performance than RAID 5.
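A minimal sketch of the XOR-based recovery used by the parity levels (RAID 3/5): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors and the parity. The block contents are arbitrary example bytes.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]    # blocks on three data drives
parity = xor_blocks(data)                         # stored on the parity drive

lost = data[1]                                    # suppose drive 1 fails
rebuilt = xor_blocks([data[0], data[2], parity])  # XOR of survivors + parity
assert rebuilt == lost
```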
Direct Memory Access (DMA)
- Transfers data from RAM to another part of the computer without CPU processing.
- Saves processing time for data that doesn't require CPU processing or can be processed by other devices.
- Devices using DMA are assigned to DMA channels.
- Examples: Sound cards accessing RAM data for processing, DMA-enabled video cards accessing system memory for graphics processing, Ultra DMA hard drives for faster data transfer.
- Programmed Input/Output (PIO): Alternative to DMA, where all data transfer goes through the CPU.
- Ultra DMA: newer protocol for the ATA/IDE interface with a burst data transfer rate of up to 33 MB/s (Ultra DMA/33).
DMA Transfer Types
- Memory to Memory Transfer: moves data from one memory address to another using DMA channels 0 and 1; the data passes through a temporary register in the DMA controller.
- Auto Initialize: After a block transfer, the current address and word count registers are automatically restored from base registers, enabling another DMA service without CPU intervention.
DMA Controller
- Manages DMA data transfers.
- Receives the source location, destination, and amount of data to transfer from the microprocessor.
- Transfers data while the microprocessor handles other tasks.
- Doesn't arbitrate for bus control; the I/O device (DMA slave) does.
- Takes control of the bus when granted by the central arbitration control point.
DMA vs. Interrupts vs. Polling
- DMA: Works in the background without CPU intervention, speeds up data transfer and CPU speed, suitable for large files.
- Interrupts: Require CPU time, request CPU usage via interrupts, used for immediate tasks.
- Polling: CPU actively monitors the process, adjustable to device needs, suitable for devices that don't need quick response.
Multiprocessors
- Interconnection of two or more CPUs, memory, and I/O equipment.
- MIMD (Multiple Instruction, Multiple Data) category.
- Controlled by a single operating system coordinating processor activities via shared memory or inter-processor messages.
Advantages
- Increased reliability due to processor redundancy.
- Increased throughput due to parallel job execution.
Types
- Tightly Coupled/Shared Memory Processors: Information shared through common memory, each processor may also have local memory.
- Loosely Coupled/Distributed Memory Multiprocessors: Each processor has private memory, information shared via interconnection switching or message passing.
Key Characteristic
- Ability to share main memory and I/O devices through interconnection structures.
Inter-Processor Arbitration
- Buses facilitate information transfer between components.
- Memory Bus: Connects CPUs and memory.
- I/O Bus: Connects I/O devices.
- System Bus: Connects major components (CPUs, I/Os, memory).
- Processors request component access via the system bus.
- Bus Contention: Resolved through arbitration by a bus controller.
Inter-Processor Communication and Synchronization
- Shared Memory Systems: Messages written to a common memory area.
- Synchronization is needed to manage shared resources like I/O.
Critical Sections
- Resources needing protection from simultaneous access.
Assumptions
- Mutual Exclusion: Only one processor can be in a critical section at a time.
- Termination: Critical section execution completes in a finite time.
- Fair Scheduling: A process requesting entry to the critical section will eventually enter in a finite time.
Semaphore
- Binary value indicating if a processor is in the critical section.
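A minimal sketch of binary-semaphore mutual exclusion. A real processor relies on an atomic test-and-set (read-modify-write) bus cycle; here that atomicity is emulated with a Python lock, and the class and function names are illustrative.

```python
import threading

class BinarySemaphore:
    def __init__(self):
        self._flag = 0
        self._guard = threading.Lock()     # stands in for an atomic RMW bus cycle

    def test_and_set(self) -> int:
        with self._guard:                  # read the old value and set to 1 atomically
            old, self._flag = self._flag, 1
            return old

    def clear(self):
        self._flag = 0                     # leaving the critical section

sem = BinarySemaphore()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        while sem.test_and_set() == 1:     # busy-wait until the semaphore was 0
            pass
        counter += 1                       # critical section: one thread at a time
        sem.clear()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                             # -> 4000: no lost updates
```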
Cache Coherence
- Each processor in a shared memory multiprocessor has its own private cache.
- Multiple copies of the same data, one in each cache, can become inconsistent (the cache coherence problem).
- Cache updates by one processor need to be communicated to others to maintain consistency.
- Ensuring data consistency is crucial for system correctness.
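A toy write-invalidate sketch of the problem and its usual fix: when one cache writes an address, the copies in the other caches are invalidated, so a later read there misses and fetches the fresh value. The write-through policy and class names are simplifying assumptions; real protocols (e.g., MESI) track more states.

```python
class ToyCache:
    def __init__(self, memory, peers):
        self.memory, self.peers, self.data = memory, peers, {}
        peers.append(self)

    def read(self, addr):
        if addr not in self.data:            # miss: fetch from shared memory
            self.data[addr] = self.memory[addr]
        return self.data[addr]

    def write(self, addr, value):
        self.data[addr] = value
        self.memory[addr] = value            # write-through to shared memory
        for peer in self.peers:
            if peer is not self:
                peer.data.pop(addr, None)    # invalidate other caches' copies

memory, peers = {0x10: 1}, []
c0, c1 = ToyCache(memory, peers), ToyCache(memory, peers)
c0.read(0x10)
c1.read(0x10)         # both caches now hold a copy of address 0x10
c0.write(0x10, 99)    # c1's stale copy is invalidated
print(c1.read(0x10))  # -> 99, re-fetched from memory
```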
Description
Explore the fundamentals of micro-programmed control, including the functions of control memory, advantages over main memory, and the role of microinstructions. This quiz covers key concepts such as addressing, registers, and microprogramming techniques essential for understanding computer architecture.