Revision Notes on Computer Number Systems and Representation (PDF)

Summary

These revision notes cover fundamental concepts in representing numbers in computing — decimal, binary, and hexadecimal systems, real-number representation in exponential and floating-point form, the IEEE 754 standard, and rounding — and then move on to computer architecture: CPU and memory organization, machine language and instruction encoding, instruction execution, performance measurement and Amdahl's Law, instruction set architectures and addressing modes, the DLX architecture, and an introduction to pipelining. These notes are aimed at undergraduate computer science students.

Full Transcript

**WEEK 2 - LECTURE 2 - REVISION NOTES**

**1. Information Representation and Number Systems**

- **Number Systems**:
  - **Decimal**: Base 10 (e.g., 25.62).
  - **Binary**: Base 2, common in computing.
  - **Hexadecimal**: Base 16, often used for concise binary representation.
- **Conversions**: Understanding how to convert between these systems is fundamental, especially decimal to binary.

**2. Real Number Representation in Computing**

- **Exponential Form**: A real number A can be written as A = m × R^e, where:
  - m: **Mantissa** -- holds the number's significant digits.
  - e: **Exponent** -- indicates the power of the base R (in computing, often base 2).
- **Floating Point**: Used to represent real numbers, defined by a mantissa and an exponent.

**3. IEEE 754 Floating Point Format**

- **Standardized Format**: IEEE 754 is implemented in most modern processors.
- **Format Types**:
  - **Single Precision** (32 bits): 1 sign bit, 8 bits for exponent, 23 bits for mantissa.
  - **Double Precision** (64 bits): 1 sign bit, 11 bits for exponent, 52 bits for mantissa.
- **Sign Bit (S)**: 1 if the number is negative, 0 if positive.
- **Exponent (e)**: Found by "normalizing" the number:
  - e is the number of right shifts needed so that the integer part i (with i > 0) satisfies i ÷ 2^e = 1 in integer division. For example, for A = 25.62, i = 25 and 25 ÷ 2^4 = 1, so e = 4.
- **Bias**: Exponent values are offset:
  - **Single precision** uses a bias of 127.
  - **Double precision** uses a bias of 1023.
  - The stored exponent is e + bias.

**4. Converting the Mantissa and Exponent**

- **Mantissa Conversion**:
  - The mantissa m is normalized so that it has the form 1.fraction.
  - Convert the fractional part to binary by repeatedly multiplying by 2 and collecting the integer bits.
- **Exponent Adjustment**:
  - Calculate the exponent by counting the shifts needed to normalize the mantissa.
  - Example: for exponent e = 3, the stored exponent is 3 + 127 = 130 in single precision.

**5. Rounding in IEEE 754 Format**

- **Approximation**: Since not all decimal fractions convert precisely to binary, rounding occurs to fit within the 23 mantissa bits (single precision).
- **Rounding Mode**: Examine the 24th bit: if it is 1, add 1 to the last bit of the 23-bit mantissa. Rounding reduces error compared to truncation but still introduces small inaccuracies.
- **Error Bound**: In single precision, errors are within 2^−24, limiting accuracy to about 7 decimal places.

**6. Putting It All Together**

Example: representing a decimal number like 25.62:
- Normalize to 1.60125 × 2^4.
- Determine the sign (S = 0) and the stored exponent e = 4 + 127 = 131 (binary: 10000011), and convert the fraction 0.60125 to binary.
- Combine these fields to represent 25.62 in binary form under IEEE 754.

**7. Self-Assessment Exercises**

- Practice converting decimal numbers to IEEE 754 (single precision) to reinforce understanding.

NOTE - FOR SELF - REVISE HOW TO DO THE "PUTTING IT ALL TOGETHER" STEP BECAUSE IT'S DIFFICULT TO UNDERSTAND

**WEEK 3 - LECTURE 1 - REVISION NOTES**

**1. Overview of the Lecture**

- **Objective:** Introduces foundational concepts in computer architecture, memory organization, and data manipulation.
- **Learning Outcomes:**
  - Understand the roles of the CPU and memory.
  - Learn about memory organization and how data/instructions are stored and accessed.

**2. Key Concepts in Computer Architecture**

- **Definition:** A computer is composed of interconnected processors, memory, and I/O devices.
- **Data Manipulation:**
  - Moving data between locations.
  - Performing arithmetic and logic calculations.

**3. CPU and Memory Overview**

- **Components of the CPU:**
  - **Arithmetic/Logic Unit (ALU):** Executes operations like addition and division.
  - **Control Unit:** Coordinates activities by managing timing and control signals.
  - **Registers:**
    - General-Purpose Registers (GPR): Temporary storage for manipulated data.
    - Special-Purpose Registers (SPR): Store critical information, e.g., the Program Counter (PC).
- **Memory Basics:**
  - Stores programs and data as bit patterns.
  - Instructions are fetched and executed sequentially by the CPU.

**4. CPU Architectures: CISC vs. RISC**

- **CISC (Complex Instruction Set Computer):**
  - Many instructions, variable-length encoding.
  - High power consumption.
  - Example: Intel processors.
- **RISC (Reduced Instruction Set Computer):**
  - Few instructions, fixed-length encoding.
  - Efficient, simple, and fast.
  - Example: ARM processors used in smartphones.

**5. Memory Organization**

- **Structure:**
  - Memory consists of cells, each with a unique address.
  - A cell is a unit of main memory and stores a fixed number of bits (commonly 8 bits, i.e., 1 byte).
- **Key Terminology:**
  - **RAM (Random Access Memory):** Allows random access to any cell.
  - **SRAM/DRAM:** Variants of RAM with differing technologies -- static RAM and dynamic RAM.
- **Addressing:**
  - Memory cells are addressed sequentially, starting from 0.
  - Address Space: Determined by the number of bits used to represent an address (e.g., a 32-bit architecture gives ~4 GB of addressable memory).

**6. Endianness in Memory**

- **Byte Ordering:**
  - **Big Endian:** Stores the most significant byte at the lowest memory address.
  - **Little Endian:** Stores the least significant byte at the lowest memory address.
- **Example:** Loading/storing a 32-bit word (34E84652) involves observing the byte order used by the architecture.

**7. Exercises**

Questions test understanding of:
- Memory address and address space.
- Calculations involving memory capacity and pixel data.
- Endianness through hex-based addressing.

**WEEK 3 - LECTURE 2 - REVISION NOTES**

**1. Overview of the Lecture**

The lecture discusses:
- **Instruction encoding and decoding**
- **Introduction to a simple machine**
- **A simple machine language**

**Learning Outcomes:**
- Articulate the simple machine architecture.
- Explain instruction encoding and decoding.
- Understand machine language.

**2. Machine Language and Instructions**

- **Machine Language**: The set of all instructions, encoded as bit patterns, understandable by the CPU.
- **Instruction Types**:
  1. **Data Transfer**: e.g., Load R1, 10(R2) transfers data.
  2. **Arithmetic/Logic**: e.g., Add R3, R1, R2 performs computations.
  3. **Control**: e.g., Bnez R1, 10 directs program flow.

**3. Instruction Encoding**

Binary instructions are encoded with fields:
- **Opcode**: Specifies the operation (e.g., load, add).
- **Operands**: Specify the data to be used.
- **Addressing Mode**: Shows where data is fetched from.

**4. Simple Architecture**

A hypothetical machine includes:
- **16 General Purpose Registers (GPRs)**, each 8 bits.
- **2 Special Purpose Registers (SPRs)**:
  - Program Counter (PC): 8 bits.
  - Instruction Register (IR): 16 bits.
- **256 Memory Cells** (8 bits each).
- **Bus**: 8 bits.

All instructions are 16 bits long:
- 4 bits for the opcode.
- 12 bits for the operands.

**5. Instruction Encoding Example**

Instruction 306E is analyzed:
- Binary: 0011 0000 0110 1110.
- Fields:
  - Opcode 0011 (store data).
  - Operand 1: Register R0.
  - Operand 2: Address 6E.

**Decoding Example**: Decode instruction B258 following the above process.

**6. Instruction Set**

A small instruction set of 12 operations:
1. **Data Transfer**: Load, Store, Move.
2. **Arithmetic/Logic**: Add, OR, AND, XOR, Rotate.
3. **Control Flow**: Jump, Halt.

Opcodes and operands are written in hexadecimal, e.g., A403.

**7. Exercises**

- Describe encoding and decoding processes.
- Identify instruction types.
- Understand the architecture and language.
- Explore machine architectures like ARM or DLX.

**WEEK 4 - LECTURE 1 - REVISION NOTES**

**1. Key Topics Covered**

- **Instruction Execution Cycles**: How instructions are fetched, decoded, and executed.
- **Learning Objectives**:
  - Understand how instructions are executed in the CPU.
  - Learn how data is manipulated during instruction execution.

**2. Instruction Execution in the CPU**

- **Key Registers**:
  - **Program Counter (PC)**: Holds the memory address of the next instruction.
  - **Instruction Register (IR)**: Holds the currently executing instruction.
- **Machine Execution Cycle**:
  - **Instruction Fetch**: Load the next instruction into the IR.
  - **Instruction Decode**: Decode the opcode and operands.
  - **Execution**: Perform the operation specified by the instruction.

**3. Example: Loading and Adding**

- **PC and IR in Action**:
  - The PC indicates where to fetch the instruction.
  - The fetched instruction is loaded into the IR.
  - Example instruction Load R1, (D7):
    1. Fetch the instruction from memory using the address in the PC.
    2. Decode the instruction in the IR.
    3. Load data from memory (address D7) into register R1.
- **Adding Two Values** -- steps:
  1. Load the first operand into a register.
  2. Load the second operand into another register.
  3. Perform the addition using the ALU (Arithmetic Logic Unit).
  4. Store the result back into memory.
  5. Halt execution.

**4. Instruction Fetch and PC Updates**

- **Fixed-Length Instructions**:
  - Instructions are 16 bits (2 bytes).
  - After fetching an instruction, the PC increments by 2.
- **Example State Changes**:
  - Initial state: PC = A0; fetch the instruction at address A0.
  - PC → A2 after fetching the first instruction, PC → A4 after the second, and so on.

**5. Decoding and Executing an Instruction**

- **Decoding Process**:
  - The first 4 bits (the opcode) identify the operation.
  - Example: load the content of memory address 0x6C into register R5.
- **Execution**: Fetch data from memory into the specified register.

**6. Example Execution Sequence**

1. **Instruction 1**: Load data from address 0x6C into R5; the PC updates for the next instruction.
2. **Instruction 2**: Load data from address 0x6D into R6; the PC updates.
3. **Instruction 3**: Add the contents of R5 and R6, store the result in R0.
4. **Instruction 4**: Store the result from R0 into memory address 0x6E.

**7. Final State**

- The data in R0 is stored in memory.
- The PC points to the instruction after the final one, halting the program.

**8. Tools and Exercises**

- **Virgule Emulator**:
  - A visual simulator available online to explore RISC-V instruction execution.
  - URL: [Virgule Emulator](https://eseo-tech.github.io/emulsiV/)
  - Instruction set documentation: [Virgule Docs](https://eseo-tech.github.io/emulsiV/doc/)
- **Exercises**: Complete the formative assessments on data manipulation (provided in the lecture folders).

**WEEK 4 - LECTURE 2 - REVISION NOTES**

**1. Introduction to Computer Performance**

- **Execution Time**: The duration from the start to the completion of a task.
  - Includes **CPU time** (computation) and **elapsed time** (waiting for I/O or running other programs).
- Example, comparing two computers:
  - Computer A: Execution Time (ET) = 20 s; Performance = 1/20.
  - Computer B: ET = 60 s; Performance = 1/60.
  - **Relative Performance**: Computer A is 3x faster than Computer B.

**2. Benchmarks for Performance Measurement**

Benchmarks are programs used to evaluate performance:
- **Real Programs**: Applications and compilers.
- **Kernels**: Key portions of real programs.
- **Benchmark Suites**: Collections of small, standardized programs (10--100 lines).

**3. CPU Performance**

CPU time is calculated from:
- **IC (Instruction Count)**: Number of instructions executed.
- **CPI (Clock Cycles per Instruction)**: Average cycles per instruction.
- **Clock Cycle Time**: Duration of a single clock cycle.

**Equations:**
1. CPU Time = CPU Clock Cycles × Clock Cycle Time
2. CPU Clock Cycles = IC × CPI
3. CPU Performance = 1 / CPU Time

**Dependencies:**
- **Clock Rate**: Hardware and organization.
- **CPI**: Architecture and instructions.
- **IC**: Instruction set and compiler.

**4. Amdahl's Law**

A critical principle for performance improvements in computer design:
- **Speedup (S)** quantifies improvement: S = Original Time / Enhanced Time.
- Factors:
  - **Fraction Enhanced (FE)**: Portion of the computation that benefits from the enhancement (FE ≤ 1).
    - Example: 40% of a 100 s task is enhanced → FE = 0.4.
  - **Speedup Enhancement (SE)**: Factor by which the enhanced portion runs faster (SE ≥ 1).
    - Example: an enhanced process takes 4 s instead of 40 s → SE = 10.

**Overall Speedup Calculation:**

S = 1 / ((1 − FE) + FE/SE)

**Example:** A CPU enhancement covering 40% of execution time (FE = 0.4) with a speedup of 10 (SE = 10):

S = 1 / ((1 − 0.4) + 0.4/10) = 1 / 0.64 ≈ 1.56

**5. Exercises**

1. **Performance Measurement**: How to use CPU time and benchmark programs to evaluate systems.
2. **CPI and Clock Rate Calculation**: Analyze instructions and calculate CPU time with a given clock rate.
3. **Amdahl's Law Application**: Design improvements to achieve a specific system-wide speedup (e.g., increasing CPU speed).

**WEEK 5 - LECTURE 1 - REVISION NOTES**

**Overview**

1. **Purpose of the Lecture:**
   - Explore the ISA: the interface between software and hardware.
   - Understand classifications of instruction set architectures.
   - Learn the advantages and disadvantages of various instruction set architectures.
2. **Key Concepts:**
   - Definition and components of an ISA.
   - Classification of ISAs based on how instructions handle data and interact with memory.
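The performance formulas and the Amdahl's Law example from Week 4, Lecture 2 can be sanity-checked with a short Python sketch (my own illustration, not lecture material; the function names are mine):

```python
def cpu_time(ic, cpi, clock_cycle_time):
    """CPU Time = (IC x CPI) x Clock Cycle Time."""
    return ic * cpi * clock_cycle_time

def amdahl_speedup(fe, se):
    """Overall speedup S = 1 / ((1 - FE) + FE / SE)."""
    assert 0 <= fe <= 1 and se >= 1
    return 1 / ((1 - fe) + fe / se)

# Lecture example: FE = 0.4, SE = 10 -> S = 1 / 0.64 = 1.5625
print(amdahl_speedup(0.4, 10))   # 1.5625

# Cross-check against the 100 s task directly:
# 60 s unchanged + 40 s / 10 enhanced = 64 s total
print(100 / (60 + 40 / 10))      # 1.5625
```

Note how the two computations agree: applying the formula and re-timing the task give the same overall speedup, which is a useful way to verify an Amdahl's Law answer in the exercises.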
**What is Instruction Set Architecture (ISA)?**

- The ISA is the part of a machine that is visible to a programmer or compiler writer.
- **Components include:**
  - **Registers**: Locations for data storage during processing.
  - **Addressing modes**: How memory locations are accessed.
  - **Operands and Operations**: The data and functions manipulated by instructions.
- Example:
  - Instruction: Add R1, R2, R3
  - Binary representation: 0101 0001 0010 0011 → 0x5123
  - **Breakdown:**
    - 4 bits for the opcode (operation: Add).
    - 12 bits for the operands (registers R1, R2, R3).

**CPU Organization for ISA Design**

1. **Key Components:**
   - **ALU (Arithmetic Logic Unit):** Performs calculations.
   - **Local Storage:** Registers, a stack, or an accumulator.
2. Understanding CPU organization is essential for designing effective instruction sets.

**Classification of Instruction Set Architectures**

There are four main classes:
1. **Stack Architecture:**
   - Operands are implicitly on top of the stack.
   - Example sequence:
     - Push A
     - Push B
     - Add
     - Pop C
   - No explicit registers; memory is accessed via the stack.
2. **Accumulator Architecture:**
   - A single accumulator register holds one operand.
   - Example sequence:
     - Load A → Accumulator = A
     - Add B → Accumulator = A + B
     - Store C → Save the result to memory.
3. **Register-Memory Architecture:**
   - One operand is in memory, the other in a register.
   - Example sequence:
     - Load R1, A
     - Add R1, B
     - Store C, R1
4. **Register-Register Architecture (Load-Store):**
   - Both operands must be in registers.
   - Example sequence:
     - Load R1, A
     - Load R2, B
     - Add R3, R1, R2
     - Store C, R3

**Operand Locations Across Architectures**

- **Stack:** Operands in stack memory.
- **Accumulator:** One operand in the accumulator.
- **Register-Memory:** One operand in a register, the other in memory.
- **Register-Register:** Both operands in registers.

**Comparison of Architectures**

**Advantages and Disadvantages:**
1. **Stack:**
   - **Pros:** Simplifies the instruction set.
   - **Cons:** Inefficient for modern compilers due to frequent memory access.
2. **Accumulator:**
   - **Pros:** Simple and compact instructions.
   - **Cons:** Limited by single-operand storage.
3. **Register-Memory:**
   - **Pros:** Balanced use of registers and memory.
   - **Cons:** Limited scalability for more complex operations.
4. **Register-Register (Load-Store):**
   - **Pros:** Efficient for modern processors; minimizes memory access.
   - **Cons:** Requires more registers and a larger instruction size.

**Exercises**

1. Define the ISA and its components.
2. Compare and contrast Register-Memory and Register-Register architectures.
3. Illustrate the instruction flow for C = A + B in the different architectures.
4. Discuss the advantages and disadvantages of the Register-Memory and Register-Register approaches.

**WEEK 5 - LECTURE 2 - REVISION NOTES**

**1. Memory Addressing**

Memory addressing covers the methods by which a computer locates and accesses data in memory.

**Key Points:**
- **Registers for Memory Addresses**:
  - Example: the **SP register** (stack pointer) holds the memory address of the beginning of the stack.
  - This concept is applied in the lab practicals for the ARM architecture.
- **Endianness** -- memory can be addressed using:
  - **Little Endian**: Least significant byte stored first.
  - **Big Endian**: Most significant byte stored first.
- Memory alignment is crucial, as it ensures data is accessed correctly.

**Memory Alignment Example:**
- The data structure struct data { char A, B; int C; char D, E; } takes **10 bytes** due to padding for alignment.
- By rearranging to struct data { char A, B, D, E; int C; }, the structure takes **8 bytes** (less padding).

**2. Addressing Modes**

Addressing modes define how an instruction specifies the memory address of an operand.

**Three Addressing Modes:**
1. **Displacement Addressing**:
   - Combines a **register** value and a **constant displacement**.
   - Example: Add R4, 100(R1) where:
     - Register R1 = 8.
     - Target memory address: 100 + 8 = 108.
     - If Reg[R4] = 20 and Mem[108] = 50, then after execution R4 = 20 + 50 = 70.
2. **Immediate Addressing**:
   - The operand is a **constant value** included in the instruction.
   - Example: Add R4, #3 where:
     - Register R4 initially holds 20.
     - The operand is the constant 3.
     - After execution, R4 = 20 + 3 = 23.
3. **Register Indirect Addressing**:
   - Uses a **register** value as the memory address.
   - Example: Add R4, (R1) where:
     - Register R1 = 8.
     - Memory at address 8 holds 30.
     - Register R4 initially holds 20.
     - After execution, R4 = 20 + 30 = 50.

**Importance of Addressing Modes:**
- They allow flexibility in accessing operands.
- These modes cover most programs' needs (75-99% of addressing cases).

**3. Relation to Instruction Length**

Instruction length depends on:
1. **Displacement Field Size**: Affects the number of bits needed for the displacement. Example: 12-16 bits of displacement capture most values (75-99%).
2. **Immediate Field Size**: Determines the size of constants directly encoded in instructions. Example: 8-16 bits for immediate fields capture 50-80% of cases.

**4. Operands in Instruction Sets**

- **Type and Size** -- examples of data types and their sizes:
  - character: 8 bits.
  - integer: 32 bits.
  - double word: 64 bits.
- Media/DSP (Digital Signal Processing) examples:
  - 32-bit floating point for 3D graphics.
  - 32 bits with four 8-bit channels for 2D images.

**Why Data Types Matter:**
- Data types define operand sizes and affect instruction length.
- Different architectures (e.g., desktop, DSP) optimize for specific operand types.

**5. Exercises and Questions**

The presentation concludes with exercises to reinforce the concepts:
1. **Addressing Mode Usage**:
   - How different instructions use displacement, immediate, and register indirect addressing.
   - Example: subtraction instructions using R3 with different memory configurations.
2. **Data Types**: Understanding the impact of data type definitions on program and instruction length.
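The three addressing-mode walkthroughs from section 2 (displacement, immediate, register indirect) can be reproduced with a toy Python model; the dictionary-based register file and memory are my own sketch, not lecture material:

```python
# Toy register file and memory matching the lecture's worked examples.
regs = {"R1": 8, "R4": 20}
mem = {8: 30, 108: 50}

# Displacement: Add R4, 100(R1)  ->  R4 += Mem[100 + R1]
regs["R4"] = 20
regs["R4"] += mem[100 + regs["R1"]]
print(regs["R4"])  # 70

# Immediate: Add R4, #3  ->  R4 += 3 (constant encoded in the instruction)
regs["R4"] = 20
regs["R4"] += 3
print(regs["R4"])  # 23

# Register indirect: Add R4, (R1)  ->  R4 += Mem[R1]
regs["R4"] = 20
regs["R4"] += mem[regs["R1"]]
print(regs["R4"])  # 50
```

Each block resets R4 to 20 first, so the three results (70, 23, 50) match the lecture's examples independently.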
**Key Takeaways:**
- Understanding memory addressing and instruction design is fundamental for optimizing architectures.
- Addressing modes balance flexibility, efficiency, and complexity.
- Operand types and instruction lengths are interdependent and essential to system architecture.

**WEEK 7 - LECTURE 1 - REVISION NOTES**

**1. Overview of the Lecture**

The lecture is structured around:
- **Operations in Instruction Sets**
- **Encoding Instruction Sets**
- **Comparing Encoding Methods**

The learning outcomes are:
1. Understanding common operations in instruction sets.
2. Describing instruction encoding processes.
3. Comparing encoding methods (variable, fixed-length, hybrid).
4. Identifying desirable features in instruction set design.

**2. Operations in Instruction Sets**

- **Categories of Operations**: Common operations include arithmetic, logical, data transfer, and control instructions.
- **Key Observations**:
  - The **most frequent instructions** tend to be the simplest. For example, on the Intel 80x86, the top 10 operations include load, conditional branch, compare, store, and add.
  - **Control flow instructions** are divided into:
    - **Conditional Branch**: 75% usage (82% for floating-point operations).
    - **Procedure Call/Return**: 19% (8% for floating-point).
    - **Jump**: 6% (10% for floating-point).

**For Media and DSPs (Digital Signal Processors):**
- Media operations often involve **single-precision floating point** or **integer data types**.
- Techniques like **Single Instruction Multiple Data (SIMD)** are employed for parallel processing.
- DSPs emphasize real-time operations, such as:
  - **Arithmetic and Logical**
  - **Data Transfer**
  - **Control**

**3. Instruction Encoding**

Instruction encoding converts operations, addressing modes, and operands into binary representations that CPUs can execute.

**Influencing Factors:**
1. **Number of Registers**: More registers simplify programming but increase encoding complexity.
2. **Addressing Modes**: More modes offer flexibility but impact instruction size.

Example: Add R1, (R2) demonstrates the combination of an operation, a register, and memory addressing.

**4. Encoding Methods**

Three primary encoding methods are discussed:

**Variable-Length Encoding**
- **Definition**: Instruction sizes vary.
- **Advantages**:
  - Supports all addressing modes.
  - More memory-efficient.
- **Disadvantages**: Complex decoding.
- **Example**: Intel 80x86 (instruction size ranges from 1 to 17 bytes). VAX example: addl3 R1, 737(R2), (R3)
  - The opcode specifies addition (32-bit integers).
  - Operands and addressing modes are encoded with different bit lengths.

**Fixed-Length Encoding**
- **Definition**: All instructions have the same size (e.g., 32 bits).
- **Advantages**:
  - Easier to decode.
  - Simplifies pipeline processing.
- **Disadvantages**: Larger code size due to unused bits.
- **Example**: DLX processor -- Add R3, R1, R2 is a 32-bit instruction with fixed allocations for the opcode and operands.

**Hybrid Encoding**
- **Definition**: Combines features of variable and fixed encoding.
- **Advantages**:
  - Balances performance and flexibility.
  - Reduces code-size variability.
- **Examples**: IBM 360/370, MIPS16.

**5. Fixed vs. Variable-Length Encoding**

- **Fixed Length**:
  - Simpler decoding.
  - Larger code size.
- **Variable Length**:
  - More compact and memory-efficient.
  - Decoding is more complex.
  - Pipeline efficiency can suffer due to size variability.

**6. Desirable Features in ISA Design**

- Use **general-purpose registers**.
- Employ a **load-store (register-register) architecture**.
- Minimize the number of **addressing modes**.
- Simplify instructions for performance.
- Use **hybrid encoding** for efficiency and flexibility.
- Prefer a **reduced instruction set (RISC)** approach.

**7. Questions and Exercises**

- Identify popular operations in instruction sets.
- Describe the process of instruction encoding.
- Compare fixed and variable-length encoding with examples.
- Explain how addressing modes are implemented in instruction encoding.

**WEEK 7 - LECTURE 2 - REVISION NOTES**

**1. DLX Architecture Overview**

- **Type:** A simplified load-store architecture modeled after MIPS.
- **Purpose:** Efficient for pipelining and as a compiler target, due to fixed-length encoding and ease of decoding.
- **Registers:**
  - **32 General-Purpose Registers (GPRs):** Named R0 to R31, where R0 always holds 0.
  - **32 Floating-Point Registers (FPRs):** Single-precision F0--F31, used as even-odd pairs for double precision (e.g., F0, F2 for 64-bit values).
- **Data Types:**
  - 8-bit bytes, 32-bit integers.
  - 32-bit single-precision and 64-bit double-precision floating point.
- **Addressing Modes:**
  - **Immediate Addressing:** Uses a 16-bit immediate field.
  - **Displacement Addressing:** Combines a register and a displacement value.
  - **Register Indirect:** Set the displacement to 0.
  - **Absolute Addressing:** Use R0 (always 0) as the base register.

**2. Instruction Formats**

DLX uses a fixed 32-bit instruction format, which facilitates pipelining and decoding.

**A. I-Type (Immediate Type)**
- **Purpose:** Loads/stores, branches.
- **Fields:** Opcode (6 bits), rs1 (source register), rd (destination register), immediate (16 bits).
- **Examples:**
  - **Load/Store:** LW R2, 250(R1) (load word).
  - **Arithmetic:** ADDI R2, R1, #100 (add immediate) -- add 100 to the contents of R1 and store the result in R2.
  - **Branch:** BNEZ R1, name (branch if not zero) -- if the contents of R1 are not equal to zero, jump.
  - **Jump Register:** JR R3 (unconditional jump).

**B. R-Type (Register Type)**
- **Purpose:** ALU operations, register reads/writes.
- **Fields:** Opcode (6 bits), rs1, rs2 (source registers), rd (destination register), func (operation).
- **Example:** ADD R3, R1, R2 (add the values in R1 and R2, store the result in R3).

**C. J-Type (Jump Type)**
- **Purpose:** Control flow (jumps, function calls).
- **Fields:** Opcode (6 bits), immediate (26 bits for the target address).
- **Examples:**
  - J name (jump unconditionally).
  - JAL name (jump and save the return address in link register R31).

**3. Operations in DLX**

1. **Load/Store:** Moving data between memory and registers.
2. **Arithmetic/Logical:** Adding, subtracting, shifts, etc., entirely in registers.
3. **Control Flow:** Branches and jumps, conditional and unconditional.
4. **Floating-Point:** Arithmetic operations on the FPRs.

**4. Addressing Mode Examples**

- **Displacement Mode:** LW R1, 400(R2) (load R1 from address R2 + 400).
- **Register Indirect:** LW R1, (R2) (displacement is 0).
- **Absolute Addressing:** LW R1, 400(R0) (R0 holds 0).

**5. End-of-Lecture Exercises**

Interpret the instructions:
- **LW R2, 100(R3):** Load the value at address R3 + 100 into R2 (displacement addressing, I-type).
- **SD 100(R3), F6:** Store the double-precision value from F6 to address R3 + 100.
- **ADD R1, R2, R3:** Add the values in R2 and R3, store the result in R1 (R-type).
- **ADDI R1, R2, #10:** Add 10 to R2, store in R1 (I-type).
- **BEQZ R5, 100:** Branch to address PC + 100 if R5 equals 0 (I-type).
- **JR R2:** Jump to the address in R2 (I-type).

**WEEK 8 - LECTURE 1 - REVISION NOTES**

**1. Overview of the Lecture**

The topic spans four lectures (weeks 8-9) and covers:
- **Introduction to Pipelining**
- **Basic DLX Pipeline**
- **Performance Issues in Pipelining**
- **Pipeline Hazards**:
  - Data hazards
  - Structural hazards
  - Control hazards
- **Handling Multi-Cycle Operations in the DLX Pipeline**
- **MIPS R4000 Pipeline** (self-learning)

**2. Instruction Execution Cycle**

The execution cycle of an instruction is divided into **five stages**:
1. **Instruction Fetch (IF)**: Fetch the instruction from memory (e.g., IR ← Mem[PC]).
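Returning to the 16-bit encoding examples of Week 3, Lecture 2, the 306E and B258 decoding exercises can be reproduced with a minimal sketch. This is my own illustration, assuming the field split shown in the 306E walkthrough (4-bit opcode, 4-bit register number, 8-bit address); other opcodes may divide the 12 operand bits differently.

```python
# Decode a 16-bit instruction of the Week 3 simple machine:
# 4-bit opcode, then (assumed) 4-bit register and 8-bit address fields.
def decode(instr: int):
    opcode = (instr >> 12) & 0xF   # top 4 bits: the operation
    reg = (instr >> 8) & 0xF       # next 4 bits: a register number
    addr = instr & 0xFF            # low 8 bits: a memory address
    return opcode, reg, addr

op, r, a = decode(0x306E)
print(f"opcode={op:X} R{r} addr={a:02X}")  # opcode=3 R0 addr=6E (store)

op, r, a = decode(0xB258)
print(f"opcode={op:X} R{r} addr={a:02X}")  # opcode=B R2 addr=58
```

Working through B258 by hand (binary 1011 0010 0101 1000) and checking the result against this decoder is a quick way to verify the decoding exercise.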
