Computer Systems and Architecture (Unit I)
Summary
This document covers the fundamental concepts of Computer Systems and Architecture (CSA) for Unit I. It covers key topics such as computer architecture, computer organization, and the evolution of computers across generations, along with basic components such as input/output devices.
Computer Systems and Architecture
Sub Code: 23CCE213

Textbook(s)
1. V. Carl Hamacher, Zvonko G. Vranesic and Safwat G. Zaky, "Computer Organization", Fifth Edition, Indian Edition, McGraw-Hill Education, 2011.
2. Patterson DA, Hennessy JL. Computer Organization and Design, The Hardware/Software Interface (ARM Edition). Fourth Edition, Morgan Kaufmann; 2010.

Reference(s)
1. Hennessy JL, Patterson DA. Computer Architecture: A Quantitative Approach. Fifth Edition, Morgan Kaufmann; 2011.
2. Behrooz Parhami, "Computer Architecture", Indian Edition, Oxford University Press, 2012.
3. John P. Hayes, "Computer Architecture and Organisation", Indian Edition, McGraw-Hill Education, 2017.
4. Stallings W. Computer Organisation and Architecture. Tenth Edition, PHI; 2016.

Evaluation pattern
- Continuous Assessment (Internal): 30
- Mid-term exam (Internal): 30
- Term project/End semester exam (External): 40

UNIT-I
- Introduction to computer system
- Usage of basic digital blocks
- Floating point numbers: IEEE single precision and double precision representation
- Floating point arithmetic: floating point adder/subtractor
- Addressing modes with examples
- Data path and controller design
- Single bus dataflow unit - Multi bus architecture

Computer architecture
Is the conceptual design and fundamental operational structure of a computer system. Computer architecture is the design and organization of computer systems: essentially the blueprint for how a computer is built, including the hardware, software, and firmware that make up the system.

Computer organization
Is the operational units and their interconnections within a computer system, describing how the hardware components work together to perform computations. It focuses on the structure, behavior, and design of a computer's physical components and their functional integration.

Introduction to computer system
How is a computer defined? The computer can be defined in multiple ways.
1. A computer is a fast electronic calculating machine that accepts digitized input information, processes it according to a list of internally stored instructions, and produces the resulting output information. The list of instructions is called a computer program, and the internal storage is called computer memory.
2. An electronic device operating under the control of instructions stored in its own memory.
3. A computer is a programmed device with a set of instructions to perform specific tasks and generate results at a very high speed. All modern computers function on the same general model of input, process and output.
4. A computer is a programmable device that stores, retrieves, and processes data.

Basic Architecture of Computer
- Input device: hardware used to enter data and instructions.
- Output device: hardware that conveys information to the user.

Evolution of computers
(i) First Generation: 1940-1956 (Vacuum tubes)
- Slower in processing speed and used machine language
- Bigger in size, even occupying an entire room; generated a lot of heat and very expensive
- Could not be used continuously for longer durations of time
The main features of the first generation are:
- Unreliable
- Slow input and output devices
- Huge size
- Need of A.C.
- Non-portable
- Consumed a lot of electricity
Some computers of this generation were: ENIAC, EDVAC, UNIVAC
- Electronic Numerical Integrator And Computer (ENIAC)
- Electronic Discrete Variable Automatic Computer (EDVAC)
- UNIVersal Automatic Computer (UNIVAC)

(ii) Second Generation: 1956-1963 (Transistors)
- Cheaper, consumed less power, more compact in size, more reliable and faster
- Magnetic cores were used as primary memory; magnetic tape and magnetic disks as secondary storage devices
- Assembly language and high-level programming languages like FORTRAN and COBOL were used
- The computers used batch processing (a method of running software programs, called jobs, in batches automatically) and multiprogramming operating systems
The main features of the second generation are:
- Reliable in comparison to first generation computers
- Smaller size as compared to first generation computers
- Generated less heat as compared to first generation computers
- Consumed less electricity as compared to first generation computers
- Faster than first generation computers
- Still very costly
- A.C. needed
- Supported machine and assembly languages
Some computers of the second generation were: IBM 1620, IBM 7094, CDC 1604, CDC 3600, UNIVAC 1108

(iii) Third Generation: 1964-1971 (Integrated Circuits, ICs)
- A single IC has many transistors, resistors and capacitors along with the associated circuitry. The IC was invented by Jack Kilby.
- This development made computers smaller in size, more reliable and efficient.
- In this generation remote processing, time-sharing and multiprogramming operating systems were used.
- High-level languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used.
The main features of the third generation are:
- More reliable in comparison to the previous two generations
- Smaller size
- Generated less heat
- Faster
- Lesser maintenance
- Still costly
- A.C. needed
- Consumed less electricity
- Supported high-level languages
Some computers of the third generation were: IBM-360 series, Honeywell-6000 series, PDP (Programmed Data Processor), IBM-370/168, TDC-316

(iv) Fourth Generation: 1971-1980 (Microprocessors)
- Very Large Scale Integrated (VLSI) circuits, having about 5000 transistors and other circuit elements with their associated circuits on a single chip.
- Computers became more powerful, compact, reliable, and affordable.
- In this generation time-sharing, real-time, network and distributed operating systems were used.
- All the high-level languages like C, C++, DBASE, etc. were used.
- Time sharing: a type of operating system which enables multiple people to use the same computer simultaneously.
The main features of the fourth generation are:
- Very cheap
- Portable and reliable
- Use of PCs
- Very small size
- No A.C. needed
- Concept of the internet was introduced
- Great developments in the field of networks
- Computers became easily available
Some computers of the fourth generation were: DEC 10, STAR 1000, PDP 11, CRAY-1 (supercomputer), CRAY X-MP (supercomputer)

(v) Fifth Generation: 1981-till date (ULSI)
- ULSI (Ultra Large-Scale Integration) technology, resulting in the production of microprocessor chips with ten million electronic components.
- This generation is based on parallel processing hardware and AI (Artificial Intelligence) software. AI is an emerging branch of computer science concerned with the means and methods of making computers think like human beings.
- All the high-level languages like C, C++, Java, .Net, etc. are used.
- Focus on higher clock speeds and performance, becoming popular in desktops and workstations.
- Huge development in storage capacity, with faster and larger storage.
- Parallel processing: use of multiple processors to divide the workload and minimize the computation time of a process.
The main features of the fifth generation are:
- Robotics
- Game playing
- Development of expert systems to make decisions in real-life situations
- Development of true artificial intelligence
- Development of natural language processing
- Advancement in parallel processing
- Advancement in superconductor technology
- More user-friendly interfaces with multimedia features
Some computers of this generation are: Desktop, Laptop, Notebook, Ultrabook, Chromebook

Hardware vs. Software
Computers are composed of two fundamental components: hardware and software. Both are essential parts of any computing system, including computers, mobile devices, and embedded systems. They serve different roles but work together to perform tasks.
- Hardware: any part of your computer that has a physical structure. E.g.: CPU, motherboard, monitor, memory and storage devices, keyboard or mouse.
- Software: any set of instructions that tells the hardware what to do and how to do it (virtual and intangible). E.g.: operating systems (Windows, macOS, Linux), applications (Photoshop, Excel), games, utilities.

Hardware components of a computer
- Motherboard
- Central processing unit (CPU)
- Memory (RAM)
- Storage
- Display
- Battery
- Graphics processing unit
- Keyboard and Touchpad
- Ports and Interfaces
- Wireless connectivity
- Webcam and Microphone
- Speakers
- Cooling system
- Optical drive
- Security features

1. Motherboard: The backbone of the system, connecting all the components and providing a platform for them to communicate.
2. Central Processing Unit (CPU): The CPU is the brain of the computer, responsible for executing instructions, performing calculations, and running software applications. It determines the computer's processing power and performance.
3. Memory: Hardware component used to store data, instructions, and information either temporarily or permanently, facilitating communication between the CPU and storage devices. Memory plays a crucial role in a computer system as it allows the processor to access and manipulate data efficiently.
(i) Primary Memory (Volatile): Commonly used for temporary storage in computing systems. E.g.: RAM (Random Access Memory) stores data and instructions currently in use, is directly accessible by the CPU, and provides temporary storage for data and programs during processing. Having more RAM allows for smoother multitasking.
Volatile memory: a type of computer memory that requires power to maintain the stored information; when the power is turned off, the stored data is quickly lost.
(ii) Secondary Memory (Non-Volatile): Long-term data storage. Contains pre-installed instructions, including firmware. Not directly accessible by the CPU; data is transferred to RAM when needed. E.g.: ROM (Read Only Memory).
Firmware: a form of microcode or program embedded into hardware devices to help them operate effectively.
- Hard Disk Drive (HDD): traditional spinning hard drives offer large storage capacities but are slower compared to solid-state drives.
- Solid-State Drive (SSD): SSDs are faster and more reliable than HDDs. They provide faster boot times and application loading but typically come in smaller capacities.
4. Display: The computer's display is the visual interface. Computers use various display technologies, including LCD, LED, and OLED (Organic LED), with different resolutions and sizes.
5. Battery: A rechargeable battery powers the computer, allowing it to be used without a constant external power source. Battery life varies based on usage and computer model.
6. Graphics Processing Unit (GPU): Dedicated GPUs, like those from NVIDIA or AMD, enhance graphics performance for tasks like gaming, video editing, and 3D rendering. Some computers also use integrated graphics within the CPU.
7. Keyboard and Touchpad: These input devices allow you to interact with the laptop. Some laptops may include a touchscreen display.
8. Ports and Interfaces: Laptops have various ports for connectivity, including USB ports, HDMI, audio jacks, and more. The availability and types of ports may vary depending on the laptop's design.
9. Wireless Connectivity: Laptops include Wi-Fi and Bluetooth capabilities for wireless internet access and peripheral connectivity.
10. Webcam and Microphone: Most laptops have built-in webcams and microphones for video conferencing and online communication.
11. Speakers: Integrated speakers provide audio output for multimedia and conferencing.
12. Cooling System: Laptops have a cooling system, including fans and heat sinks, to dissipate heat generated by the CPU and GPU during operation.
13. Optical Drive: Many modern laptops no longer include optical drives, but some models still have DVD or Blu-ray drives for reading and writing optical discs.
14. Security Features: Laptops may include security features like fingerprint readers, face recognition, or Trusted Platform Modules (TPMs) for enhanced data protection.

Usage of basic digital blocks
A computer consists of four functionally independent main parts: (i) Input unit, (ii) Output unit, (iii) Storage/Memory unit, (iv) CPU.

(i) Input Unit:
- Allows users to enter data and commands into the computer system.
- Converts the data from electrical signals to computer-readable machine language.
- Instructs the CPU to receive data from the input devices.
- Supplies the converted data to the computer system for further processing.
- Examples include keyboards, mice, scanners, and microphones.

(ii) Output Unit:
- Converts the machine language into electronic signals readable by the output devices.
- Output devices display or present processed information to the user.
- E.g.: monitor, printer, speakers, headphones, projector, GPS and plotter are some output devices of a computer.

(iii) Storage/Memory Unit:
- The data received from the input unit is stored in the memory unit.
- Data and information are passed to the CPU for further processing.
- It stores any data or instructions created by the CPU during intermediate processing, and afterwards stores the final result of the CPU's data processing.
- Finally, it sends the processed results to the output devices.
- It also stores data and information for future use.
The memory unit is divided into two categories:
(i) Primary Memory: Internal storage. The primary memory is the most quickly accessible memory unit. E.g.: RAM, ROM.
(ii) Secondary Memory: Secondary or external storage is not directly accessible by the CPU. Data and information transmission and reception are slower than in primary memory. Examples include hard disk drives (HDDs), solid-state drives (SSDs), compact discs (CDs), pen drives, etc.

(iv) Central Processing Unit (CPU): The Central Processing Unit (CPU) is the location where programs are executed.
The CPU includes registers, an arithmetic logic unit (ALU), and a control unit (CU). The CPU is responsible for executing instructions, performing calculations, and managing data processing tasks. It interprets and executes assembly language instructions, and interacts with all the other parts of the computer architecture to make sense of the data and deliver the necessary output. It consists of three main components: Registers, the Arithmetic Logic Unit (ALU), and the Control Unit (CU).

Registers:
- CPU registers are small, fast storage locations within the CPU of a computer.
- Used to store data and instructions temporarily during the execution of programs.
- Registers provide faster access to data than main memory or cache, significantly speeding up processing.
- Registers store intermediate data that the CPU processes during arithmetic or logical operations.
- Registers temporarily hold the instructions fetched from memory before execution.
- Registers store memory addresses pointing to specific data or instructions in main memory.

Arithmetic Logic Unit (ALU):
- Includes the electrical circuitry that performs the arithmetic and logical operations on the supplied data.
- It is used to execute all arithmetic (addition, subtraction, multiplication, division) and logical (AND, OR, etc.) computations.
- Registers are used by the ALU to retain the data being processed.

Control Unit (CU):
- Coordinates and controls the other functional units of the computer.
- It passes data from the memory unit to the ALU.
- When the computation is completed and the results are produced by the ALU, the CU returns the computed data to the memory unit.

Fixed and Floating-Point Representation
There are two approaches for storing real numbers (non-integers: numbers with a fractional part) properly in memory locations:
- Fixed point numbers
- Floating point numbers

Fixed point number: a way of representing real numbers in binary or digital systems where the decimal (or binary) point is fixed at a specific position. This representation has a fixed number of bits for the integer part and for the fractional part. A fixed-point number is a positive or negative number whose radix point sits at a fixed position.

Fixed point representation
- Sign bit: a positive number has a sign bit 0, while a negative number has a sign bit 1.
- Integral part: the integral part is of different lengths at different places. It depends on the register's size; for example, in an 8-bit register the integral part is 4 bits.
- Fractional part: the fractional part is also of different lengths at different places. It depends on the register's size; for example, in an 8-bit register the fractional part is 3 bits.
- 8 bits = 1 sign bit + 4 bits (integral) + 3 bits (fractional)
- 16 bits = 1 sign bit + 9 bits (integral) + 6 bits (fractional)
- 32 bits = 1 sign bit + 15 bits (integral) + 16 bits (fractional)

Example: Represent 4.5 in fixed point representation
Step 1: Convert the number into binary form: 4.5 = 100.1
Step 2: Represent the binary number in fixed point notation.
The smallest negative number in this fixed-point representation: -15.875
The largest number in this fixed-point representation: +15.875

Limitations:
- The range of this fixed-point notation is only from -15.875 to +15.875; we can only represent numbers within a set limit.
- It is not suitable for representing a large amount of data.
- Precision is limited (precision: how close the stored value is to the actual value, which depends on the number of bits used for the fractional part).
A small code sketch of this 8-bit layout follows below.
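To make the 8-bit layout above concrete, here is a minimal Python sketch (not part of the original notes) that packs and unpacks a sign-magnitude fixed-point value using the 1 sign + 4 integer + 3 fraction split; the function names are illustrative assumptions.

```python
# Minimal sketch of the 8-bit fixed-point layout described above:
# 1 sign bit + 4 integer bits + 3 fraction bits (sign-magnitude).

def to_fixed8(value: float) -> str:
    """Encode a real number as a 1+4+3 sign-magnitude fixed-point bit string."""
    sign = '1' if value < 0 else '0'
    # Scale by 2^3 so the 3 fraction bits become part of a single integer.
    scaled = round(abs(value) * 8)
    if scaled > 0b1111111:                  # 1111.111 = 15.875 is the largest magnitude
        raise OverflowError("out of fixed-point range (-15.875 .. +15.875)")
    return sign + format(scaled, '07b')

def from_fixed8(bits: str) -> float:
    """Decode the 1+4+3 bit string back to a real number."""
    sign = -1 if bits[0] == '1' else 1
    return sign * int(bits[1:], 2) / 8

print(to_fixed8(4.5))           # 00100100 -> sign 0, integer 0100 (4), fraction 100 (.5)
print(from_fixed8('00100100'))  # 4.5
print(from_fixed8('11111111'))  # -15.875, the smallest representable value
```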
Floating point number
A floating-point number is a way to represent real numbers (non-integers: numbers with a fractional part) in computing. It is a positive or negative number with a decimal point whose position is not fixed. The term "floating-point" refers to the fact that the decimal point can "float", i.e. be positioned anywhere within the number, enabling the representation of both very large and very small numbers. For example, 3.145, 12.99, and -234.9876 are all floating-point numbers, since the decimal point is not always in the same position. These numbers are stored in a format that allows for a wide range of values, e.g. very large numbers such as 3.8 × 10^100 and very small numbers such as 3.8 × 10^-100.

Why do we need floating-point numbers in computing? Floating-point numbers are essential because:
(i) Representing real numbers: they enable us to work with real-world values that are not whole numbers. Many scientific, engineering, and financial calculations require precise representation of decimal numbers with varying levels of precision. Floating-point numbers allow us to perform these calculations accurately and efficiently.
(ii) Wide range of values: floating-point numbers can represent a vast range of values, from extremely small numbers (e.g., the mass of an electron) to extremely large numbers (e.g., the distance between galaxies).
(Whole numbers: no negatives, no fractions. E.g.: 0, 1, 2, 3, 4, ...)

How are floating-point numbers represented in computers? Floating-point notation has two types of notation:
1. Scientific notation: a method of representing binary numbers in the form a × b^e.
2. Normalized notation (fixed and single representation): the result is said to be normalized if it is represented with a leading 1 bit. Floating-point numbers are to be represented in normalized form; a normalized number provides more accuracy than the corresponding de-normalized number.

IEEE 754 Standard
Floating-point numbers are represented using a standardized format known as the Institute of Electrical and Electronics Engineers (IEEE) floating-point standard. This standard specifies how the numbers are encoded in binary, consisting of a sign bit, an exponent, and a significand (mantissa).
- The sign bit determines the positive or negative nature of the number.
- The exponent represents the magnitude (integer) part, acting as a scaling factor that increases the range.
- The mantissa (significand) stores the fractional part.
According to the IEEE 754 standard, a floating-point number is represented as:
  S: sign bit (0 for positive, 1 for negative)
  M: mantissa or significand (fractional part)
  E: exponent (scaling factor)
  Base = radix
  Value = Sign × Significand × Base^Exponent, i.e. N = ± Significand × Base^(± Exponent)
Examples:
  4.7988 = 0.0047988 × 10^3
  12.345 = 0.012345 × 10^3
  (228)10 = (11100100)2 = (1.11001)2 × 2^7

E.g.: 7.125(10) = 111.001(2). The result can be written as:
  (111.001)2 = 0.0111001 × 2^4
  (111.001)2 = 0.111001 × 2^3
  (111.001)2 = 1.11001 × 2^2
  (111.001)2 = 111001 × 2^-3
  (111.001)2 = 11100.1 × 2^-2
Note: the primary issue with this notation is that, when storing the mantissa, we must always specify the radix-point position to the processor. The same number can have multiple representations, leading to ambiguity. To eliminate this ambiguity, we apply normalization: a single unique format avoids ambiguity (see the sketch below).
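The short Python sketch below is added only for illustration (it is not from the notes); it evaluates the five representations of (111.001)2 listed above and shows that they all denote the same value 7.125, which is exactly the ambiguity that normalization removes.

```python
# Each (significand, exponent) pair below is one of the representations of
# (111.001)2 = 7.125 listed above, with the binary significand written in decimal.
representations = [
    (0.4453125, 4),   # 0.0111001 in binary
    (0.890625,  3),   # 0.111001
    (1.78125,   2),   # 1.11001  <- the unique normalized (1.M) form
    (57.0,     -3),   # 111001
    (28.5,     -2),   # 11100.1
]

for significand, exponent in representations:
    print(significand * 2 ** exponent)   # every line prints 7.125
```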
Normalization
Need of normalization (fixed and single representation):
1. Ensures a unique representation: numbers are stored in a consistent and efficient format, improving precision and reducing redundancy.
2. Maximizes precision: normalization ensures the most significant bit (MSB) is 1, using all available bits for precision.
3. Optimizes storage and computation: it helps fit numbers within a standard format like IEEE 754, allowing efficient arithmetic operations.
4. Prevents loss of significant digits: shifting the radix point to the correct position preserves important digits and avoids unnecessary leading/trailing zeros.

Types of normalization
1. Explicit normalization (0.M normalization): move the radix point to the LHS of the MSB "1" in the bit sequence.
   (111.001)2 = 0.111001 × 2^3    Value = (-1)^S × (0.M) × Base^(E - bias)
2. Implicit normalization (1.M normalization): move the radix point to the RHS of the MSB "1" in the bit sequence.
   (111.001)2 = 1.11001 × 2^2     Value = (-1)^S × (1.M) × Base^(E - bias)
   Implicit normalization gives more precision than explicit normalization.
3. 0.1M normalization: (a) the integer part should be zero; (b) for 0.b1 b2 b3 ..., b1 > 0.
   (111.001)2 = 0.111001 × 2^3    Value = (-1)^S × (0.1M) × Base^(E - bias)

Implicit normalized notation
In the normalized form, there is a single non-zero digit immediately before the radix point.
Eg: 4.5(10) = 100.1(2); the result is said to be normalized if it is represented with a leading 1 bit, i.e. 1.001(2) × 2^2.
-53.5 is normalized as -53.5 = (-110101.1)2 = (-1.101011)2 × 2^5, in the form (-1)^S × (1.M) × Base^(E - bias).

Explicit vs. implicit normalization
- 0.M notation (explicit): the same value has many representations, e.g. 500 = 0.5 × 10^3 = 0.05 × 10^4 = 0.005 × 10^5 = 0.0005 × 10^6, so the representation is ambiguous.
- 0.1M notation: write 1500 in 0.1M notation. 0.15 × 10^4 = 1500 is valid; 0.15 × 10^3 and 0.015 × 10^5 are not. The only possibility to get 1500 is 0.15 × 10^4.
- 1.M notation (implicit): represent 1500 in normalized form. 1.5 × 10^3 = 1500 is valid; 1.15 × 10^3 = 1150 and 1.05 × 10^3 = 1050 are not. The only possibility to get 1500 is 1.5 × 10^3; the leading 1 is the implicit bit.

Different sizes of floating-point numbers
The most common sizes are single precision (32 bits) and double precision (64 bits). There are also extended precision formats that use even more bits to store floating-point numbers.
- Half precision (16 bit): 1 sign bit, 5 exponent bits, and 10 mantissa bits
- Single precision (32 bit): 1 sign bit, 8 exponent bits, and 23 mantissa bits
- Double precision (64 bit): 1 sign bit, 11 exponent bits, and 52 mantissa bits
- Quadruple precision (128 bit): 1 sign bit, 15 exponent bits, and 112 mantissa bits

IEEE 754 formats
1. Single precision (32 bit)
2. Double precision (64 bit)

Biasing
Bias is a crucial part of floating-point representation, enabling efficient handling of a wide range of numbers. It is the offset applied to the exponent to allow representation of both very small and very large numbers. Bias is a fixed value determined by the floating-point format, and biasing makes it easier to compare the magnitudes of floating-point numbers.
- Bias = 127 for 32-bit conversion: 2^(8-1) - 1 = 128 - 1 = 127. (An 8-bit exponent field holds 2^8 = 256 values, 0-255; the actual exponent range is -2^(8-1) to 2^(8-1) - 1, i.e. -128 to +127.)
- Bias = 1023 for 64-bit conversion: 2^(11-1) - 1 = 1024 - 1 = 1023.
Instead of storing the actual exponent E, we store a biased exponent E_biased:
  E_biased = E + Bias   (Excess-127 for single precision)
Offsetting the exponent: the actual exponent value is added to the bias value before being stored in the exponent field (a small sketch follows below).
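A minimal Python sketch of the biasing rule above, assuming only the formulas already given (Bias = 2^(k-1) - 1 for a k-bit exponent field and E_biased = E + Bias); the helper name is illustrative.

```python
def exponent_bias(exp_bits: int) -> int:
    """Bias for a k-bit exponent field: 2**(k-1) - 1."""
    return 2 ** (exp_bits - 1) - 1

print(exponent_bias(8))    # 127  (single precision)
print(exponent_bias(11))   # 1023 (double precision)

# Storing and retrieving an actual exponent E = 2 in single precision:
bias = exponent_bias(8)
e_biased = 2 + bias             # 129, stored in the exponent field
print(format(e_biased, '08b'))  # '10000001'
print(e_biased - bias)          # back to the actual exponent, 2
```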
Retrieving the exponent: when the floating-point number is interpreted, the bias is subtracted from the stored exponent to get the actual exponent:
  E = E_biased - Bias
Actual exponent ranges:
- Single precision: -2^(8-1) to 2^(8-1) - 1, i.e. -128 ≤ n ≤ 127
- Double precision: -2^(11-1) to 2^(11-1) - 1, i.e. -1024 to 1023

(i) Single Precision (32-bit floating-point number)
Single precision allows for a large range of numbers with lower precision, and is suitable for many general-purpose applications where high precision is not critical. It uses 1 bit for the sign, 8 bits for the exponent, and 23 bits for the mantissa. The most significant bit is the sign of the number: 0 indicates positive and 1 indicates negative.
- Sign bit (1 bit long): determines whether the number is positive or negative; 1 = negative, 0 = positive.
- Exponent field (8 bits long): determines the range of numbers that can be represented. Increasing the exponent bits increases the range, not the precision. To cover negative exponents, biased exponent = 127 + real exponent.
- Mantissa field (23 bits long): determines the precision of numbers. Increasing the mantissa bits increases the precision, not the range.

(ii) Double Precision (64-bit floating-point number)
Provides significantly higher precision and a much larger range compared to single precision; a larger size generally means higher precision and a wider range. It is the most commonly used type, offering a good balance of precision and performance for most scientific and engineering applications, and is used where high precision is required. It uses 1 bit for the sign, 11 bits for the exponent, and 52 bits for the significand. The exponent range is -2^(11-1) to 2^(11-1) - 1, i.e. -1024 to 1023.

Example: Convert (-7.75)10 into single precision format
Step 1: Convert to binary: -7.75 = (-111.11)2
Step 2: Normalized scientific notation: -1.1111 × 2^2
Step 3: Compute the biased exponent (the actual exponent is added to the bias before being stored in the exponent field): E_biased = E + Bias = 2 + 127 = 129; in binary, E_biased = (10000001)2
Step 4: Write the components in the format sign | exponent | mantissa:
  1 10000001 11110000000000000000000

Converting the single precision format back to a floating-point value, then into decimal notation:
  1 10000001 11110000000000000000000
Step 1: Value = (-1)^S × (1.M) × Base^(E_biased - Bias). (Retrieving the exponent: the bias is subtracted from the stored exponent to get the actual exponent, E = E_biased - Bias.)
Step 2: (-1)^1 × (1.1111)2 × 2^(129-127)
Step 3: (-1)^1 × (1 + 0.5 + 0.25 + 0.125 + 0.0625) × 2^2
Step 4: -1.9375 × 2^2 = -7.75
A verification sketch using Python's struct module follows below.
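The hand conversion of -7.75 can be checked with Python's standard struct module, which exposes the actual IEEE 754 single-precision bit pattern. This is a verification sketch added here, not part of the original notes.

```python
import struct

# Encode -7.75 as IEEE 754 single precision and show the bit fields.
bits = struct.unpack('>I', struct.pack('>f', -7.75))[0]
pattern = format(bits, '032b')
print(pattern[0], pattern[1:9], pattern[9:])
# 1 10000001 11110000000000000000000  -> matches the hand-worked result above

# Decode the fields back to a value, mirroring Steps 1-4 above.
sign = -1 if pattern[0] == '1' else 1
e_biased = int(pattern[1:9], 2)                  # 129
mantissa = 1 + int(pattern[9:], 2) / 2 ** 23     # 1.9375, including the implicit leading 1
print(sign * mantissa * 2 ** (e_biased - 127))   # -7.75
```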
Special Purpose Registers
- Instruction Register (IR): holds the current instruction being executed.
- Program Counter (PC): keeps track of the memory address of the next instruction that the CPU needs to execute.
- Stack Pointer (SP): points to the top of the stack.
- Floating-Point Registers: designed for storing floating-point numbers and supporting mathematical computations.

Machine cycle (or instruction cycle)
The machine cycle is the sequence of steps a computer's central processing unit (CPU) takes to process instructions and perform tasks: the CPU follows a series of stages to retrieve, decode, and carry out a command. It is the process the CPU follows to execute a single instruction in a program. It has three phases (a toy simulation sketch follows at the end of this section):
(i) Fetch the instruction from memory: the CPU fetches the next instruction from memory, based on the address stored in the Program Counter (PC). The fetched instruction is stored in the Instruction Register (IR).
(ii) Decode the instruction (assembly language to machine language): the Control Unit (CU) analyzes the binary code of the instruction to determine:
- Opcode: the operation to be performed (e.g., add, subtract, load, store).
- Operands: the data or memory addresses the operation will involve.
(iii) Execute the instruction: based on the decoded instruction, the CPU generates control signals to initiate the execution phase.

Instruction Formats
Format (assembly form: Mnemonic Destination, Source):
1. Zero Address Instruction: Op-Code
2. One Address Instruction: Op-Code | Address
3. Two Address Instruction: Op-Code | Address-1 | Address-2
4. Three Address Instruction: Op-Code | Address-1 | Address-2 | Address-3
- Zero Address Instructions: PUSH A (TOP = A)
- One Address Instructions: LOAD A AC
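To tie the machine cycle and the one-address (accumulator) instruction style together, here is a toy fetch-decode-execute loop in Python; the opcodes, memory layout, and the machine itself are invented for illustration and are not from the notes.

```python
# Toy one-address (accumulator) machine: each instruction names at most one
# memory address, and the accumulator (AC) is the implicit second operand.
memory = {
    0: ('LOAD', 10),    # AC <- M[10]
    1: ('ADD', 11),     # AC <- AC + M[11]
    2: ('STORE', 12),   # M[12] <- AC
    3: ('HALT', None),
    10: 7, 11: 5, 12: 0,
}

pc, ac = 0, 0                       # Program Counter and Accumulator
while True:
    instruction = memory[pc]        # Fetch: read the instruction addressed by PC (into the IR)
    pc += 1
    opcode, operand = instruction   # Decode: split into opcode and operand
    if opcode == 'HALT':            # Execute: perform the decoded operation
        break
    elif opcode == 'LOAD':
        ac = memory[operand]
    elif opcode == 'ADD':
        ac += memory[operand]
    elif opcode == 'STORE':
        memory[operand] = ac

print(memory[12])   # 12, the result of 7 + 5
```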