Computer Architecture and Organization PDF
Summary
This document provides an overview of computer architecture and organization, covering topics such as computer systems architecture, computer organization, and factors affecting computer design. It also discusses computer technologies and relevant tools.
Full Transcript
**COMPUTER ARCHITECTURE AND ORGANIZATION**

**COMPUTER SYSTEM ARCHITECTURE**

**ARCHITECTURE**

- The practice of building design and its resulting products
- Customary usage refers only to those designs and structures that are culturally significant

**ARCHITECTURE (Computer Science)**

- A general term referring to the structure of all or part of a computer system
- Covers the design of system software, such as the OS
- The combination of hardware and basic software that links the machines on a computer network

**COMPUTER SYSTEMS ARCHITECTURE**

- Refers to an entire structure and to the details needed to make it functional
- Covers computer systems, microprocessors, circuits, and system programs
- Implies how the elements of a computer fit together

**COMPUTER ORGANIZATION**

- Represents the implementation of the computer architecture
- Refers to the computer's implementation in terms of its actual hardware

**FACTORS AFFECTING THE COMPUTER DESIGNER**

- **Architecture**
- **System**
- **Organization**
- **Tools** - A range of software products, from packages that perform hardware design at the circuit level, to computer simulators, to suites of programs used as benchmarks or test cases to compare the speeds of different computers
- **Applications** - Refer to the end uses of the computer
- **Technology** - Indicates the importance of the processes used to manufacture computer components

**COMPUTER TECHNOLOGIES**

- **Application**
- **Buses** - Affect the performance of a computer in terms of structure, organization, and control
- **Magnetic Materials** - Used for hard disks
- **Peripherals**
- **Optical Technology** - Used for CD-ROM, DVD, and Blu-ray drives and for network links
- **Semiconductor Technology** - Used to fabricate the processor and its main memory
- **Device Technologies** - Determine the speed of the computer and the capacity of its memory system

**COMPUTER ARCHITECT**

- Determines what attributes are important for a new computer, then designs a computer
to maximize performance while staying within cost, power, and availability constraints

**TASKS OF A COMPUTER ARCHITECT**

- Instruction set design
- Functional organization
- Logic design
- Implementation

**RELEVANT TOOLS, STANDARDS, AND/OR ENGINEERING CONSTRAINTS**

1. Design Goals
2. Cost
3. Performance
4. Power Consumption
5. Instruction Set Architecture

**DESIGN GOALS**

- The precise shape of a computer system is determined by the constraints and objectives for which it was designed.

**VARIABLES IN DESIGN**

- Standards
- Cost
- Memory Space
- Latency
- Throughput
- Features
- Scale
- Weight
- Reliability
- Expandability
- Power Consumption

**COST**

- Generally kept constant and dictated by device or commercial criteria

**PERFORMANCE**

**CLOCK SPEED**

- The number of cycles per second that the CPU's main clock runs at
- Often used to describe a computer's performance, in MHz or GHz
- A system with a higher clock rate does not always perform better

**SPEED OF A COMPUTER**

- Also affected by the amount of cache a processor has

**PROCESSOR**

- In general, the faster the processor runs and the larger its cache, the higher the computer's speed

**THINGS THAT AFFECT THE SPEED OF A COMPUTER**

- Number of functional units in the system
- Bus speed
- Usable memory
- Type and order of instructions in the program being executed

**TWO MAIN TYPES OF SPEED OF A COMPUTER**

1. **LATENCY** - Refers to the interval between the start of a process and its completion
   - **INTERRUPT LATENCY** - Refers to the system's guaranteed worst-case response time to an electronic event
   - **LOW INTERRUPT LATENCY** - Needed by computers that control machinery, which work in a real-time environment, to prevent system failure
2. **THROUGHPUT** - Refers to the amount of work completed per unit of time

**PERFORMANCE DEPENDING ON THE APPLICATION DOMAIN**

- Other metrics may be used to assess a computer's success.
A device may be:

- **CPU Bound** (numerical calculation)
- **I/O Bound** (web serving applications)
- **Memory Bound** (database applications)

**BENCHMARKING**

- The process of running a series of tests on a computer program, a set of programs, or other operations in order to assess the relative performance of a computer
- May reveal strengths, but may not by itself settle the selection of a computer
- Example: one device may be better at handling scientific applications while another is better at playing common video games
- Designers have also been known to include special features in their products that allow a particular benchmark to run quickly but do not provide similar benefits for other, more general tasks

**EVOLUTION OF COMPUTERS**

**THE EARLY YEARS**

**COUNTING MACHINE INVENTIONS**

| **YEAR** | **NAME** |
| --- | --- |
| **200 BC** | Chinese Abacus |
| **500 BC** | Egyptian Abacus |
| **1620** | Napier's Bones (John Napier) |
| **1653** | Pascaline (Blaise Pascal) |
| **1673** | Leibniz's Calculator (Gottfried Wilhelm von Leibniz) |
| **1801** | Weaving loom (Joseph Marie Jacquard) |
| **1823** | Mechanical Calculator Machine (Charles Babbage) |
| **1941** | Mark 1 (Harvard University) |

**FIRST GENERATION COMPUTERS (1941 -- 1956)**

**1941**

- First-generation computers were huge, slow, expensive, and often unreliable
- The ENIAC ***(Electronic Numerical Integrator and Computer)***, which used vacuum tubes, was built

**1951**

- Eckert and Mauchly built the UNIVAC ***(Universal Automatic Computer)***, which could perform 10,000 additions per second

**VACUUM TUBE**

- An electronic tube, about the size of a light bulb, used as an internal computer component
**PROBLEMS OF USING VACUUM TUBES**

- Generated a great deal of heat, causing many problems in temperature regulation and climate control
- Burnt out frequently
- People operating the computer could not easily tell whether a fault lay in the machine or in the program

**SECOND GENERATION COMPUTERS (1956 -- 1963)**

**FAMOUS COMPUTER SCIENTISTS DURING THIS ERA**

- John Bardeen
- Walter Houser Brattain
- William Shockley

**TRANSISTOR**

- Used in 2nd generation computers
- Smaller than vacuum tubes
- Consumes less energy
- Needs no warm-up time
- Generates much less heat
- Faster and more reliable

**THIRD GENERATION COMPUTERS (1964 -- 1971)**

**1964**

- The ***IBM System/360*** series was introduced, and it came in several models and sizes
- Was used for business and scientific programs
- Other computer models introduced were the ***CDC 7600*** and ***B2500***

**MICROCHIP**

- A device that replaced magnetic core memory

**INTEGRATED CIRCUIT TECHNOLOGY**

- Used in 3rd generation computers; reduced the size and cost of computers
- First manufactured in 1961 in Silicon Valley

**ADVANTAGES OF INTEGRATED CIRCUITS (IC)**

- Reliable, compact, and cheaper
- Hardware and software were sold separately, which created the software industry
- The customer service industry flourished (reservations and credit checks)
- More sophisticated: several programs can run at the same time, computer resources can be shared, and interactive processing is supported

**FOURTH GENERATION COMPUTERS (1971 -- PRESENT)**

- Took 55 years for the 4 generations to evolve
- Hardware technologies such as silicon chips, microprocessors, and storage devices were invented

**COMPUTER MODELS INTRODUCED IN THE 4TH GENERATION**

- Apple Macintosh
- Dell
- IBM
- Acer

**1971**

- Intel created the first microprocessor

**1976**

- Steve Wozniak and Steve Jobs built the 1st Apple computer

**1981**

- IBM introduced its first personal computer

**MICROPROCESSOR**

- A specialized chip developed for computer memory and logic
- A large-scale integrated circuit which contained thousands of
transistors, capable of performing all of the functions of a computer's CPU

**ADVANTAGES OF MICROPROCESSORS**

- Computers became 100 times smaller than ENIAC
- Gains in speed, reliability, and storage capacity
- The personal computer and software industries boomed

**FIFTH GENERATION COMPUTERS (PRESENT AND BEYOND)**

- Technologies are more advanced and still being developed to become more efficient
- Includes programs that translate languages

**FIFTH GENERATION TECHNOLOGIES**

- Silicon chips
- Processors
- Robotics
- Virtual Reality
- Intelligent Systems

**NEW ERA COMPUTERS**

- Computer technology that is more advanced, sophisticated, and modern

**LATEST INVENTIONS OF THE NEW ERA**

- Supercomputers
- Mainframe computers
- Minicomputers
- Personal computers
- Mobile computers

**INSTRUCTION SET ARCHITECTURE**

- A specific, standardized, programmer-visible interface to the hardware
- A set of ***instructions*** with associated argument fields, assembly syntax, and machine encoding
- A set of named ***storage locations*** (registers, memory)
- A set of ***addressing modes*** (ways to name locations)
- A part of the processor architecture
- Works as an interface between hardware and software
- The entire group of commands that the processor can perform to execute program instructions

**INTEL 8085 ISA**

- Consists of 246 operation codes and 74 instructions

![A diagram of an instruction set](media/image2.png)

**HOW IS THE INSTRUCTION SET ARCHITECTURE IMPLEMENTED?**

**MICROARCHITECTURE**

- The processor architecture at the hardware level
- Defined as the actual hardware implementation of the ISA
- The hardware circuitry of the processor chip that implements one particular ISA

![](media/image4.png)

**FOUR STAGES OF THE PROGRAM COMPILATION PROCESS**

1. **PREPROCESSOR** - Replaces preprocessor directives with actual code to get pure HLL code
2. **COMPILER** - Accepts pure HLL code as input and produces assembly language as output
3. **ASSEMBLER** - Accepts assembly code as input and produces binary machine object code as output
4. **LINKER** - Links all object code files and library references and produces an executable file as output

![](media/image6.png)

**CLASSIFICATION OF ARCHITECTURES BASED ON ADDRESSING MODES**

1. **STACK ARCHITECTURE** - Operands are implicitly on top of a stack
2. **ACCUMULATOR ARCHITECTURE** - One operand is implicitly an accumulator; essentially a 1-register machine
3. **GENERAL-PURPOSE REGISTER ARCHITECTURE**
   a. **REGISTER-MEMORY ARCHITECTURE** - One operand can be in memory
   b. **LOAD-STORE ARCHITECTURE** - All operands are registers (except for load/store)

![](media/image8.png)

![A table with text and images](media/image10.png)

**GENERAL-PURPOSE REGISTER ISAs**

1. **LOAD/STORE (0,3)**
   - ALU instructions: 0 memory operands, 3 register operands
   - Special load/store instructions for accessing memory
   - Easy to pipeline, but higher instruction count (RISC)
2. **REGISTER-MEMORY (1,2)**
   - ALU instructions: 1 memory operand, 2 total operands
   - Harder to pipeline, but more compact programs (x86)
3.
**MEMORY-MEMORY (2,2) or (3,3)**
   - ALU instructions: 2/3 memory operands, 2/3 total operands
   - Hardest to pipeline, most compact programs
   - Not used today (DEC VAX)

**OPERATION TYPES IN THE INSTRUCTION SET**

| **OPERATOR TYPE** | **EXAMPLES** |
| --- | --- |
| **Arithmetic and logical** | Integer arithmetic and logical operations: add, or |
| **Data transfer** | Loads-stores (moves on machines with memory addressing) |
| **Control** | Branch, jump, procedure call and return, traps |
| **System** | OS call, virtual memory management instructions |
| **Floating point** | Floating-point operations: add, multiply |
| **Decimal** | Decimal add, decimal multiply, decimal-to-character conversion |
| **String** | String move, string compare, string search |
| **Graphics** | Pixel operations, compression/decompression operations |

**COMPLEX INSTRUCTION SET COMPUTER (CISC)**

- Has a larger set of instructions with many addressing modes

**REDUCED INSTRUCTION SET COMPUTER (RISC)**

- Has a smaller set of instructions with few addressing modes

![](media/image12.png)

![A graph with numbers and text](media/image14.png)

**COMPUTER METRICS**

**METRICS**

- Measures of quantitative assessment commonly used for comparing and tracking performance or production

**CRITERIA OF METRICS (DAVID LILJA)**

- **LINEARITY** - A metric should be linear: increasing the performance of a computer by a fraction x should be reflected by an increase of fraction x in the metric
- **RELIABILITY** - Should correctly indicate whether one computer is faster than another; also called ***monotonicity***: an increase in the value of the metric should indicate an increase in the speed of the computer
- **REPEATABILITY** - Should always yield the same result under the same conditions
- **EASE OF MEASUREMENT** - If a metric is difficult to measure, an independent tester will have great difficulty confirming it
- **CONSISTENCY** - The metric is precisely defined and can be applied across different systems; should arguably be called ***universality*** or ***generality*** to avoid confusion with repeatability
- **INDEPENDENCE** - Independent of commercial influence

**TERMINOLOGY FOR COMPUTER PERFORMANCE**

- **EFFICIENCY** - An indication of the fraction of time that the computer is doing useful work

$$\text{Efficiency} = \frac{\text{total time executing useful work}}{\text{total time}} = \frac{\text{optimal time}}{\text{actual time}}$$

![](media/image16.png)

- **THROUGHPUT** - A measure of the amount of work performed per unit time; counting work as instructions executed is meaningful only if the instructions perform useful calculations
- **LATENCY** - The delay between activating a process and the start of the operation; the waiting time
- **RELATIVE PERFORMANCE** - Refers to how one computer performs with respect to another; the relative performance of computers A and B is the inverse ratio of their execution times
$$\text{Performance}_{A\text{ to }B} = \frac{\text{performance}_{\text{computer A}}}{\text{performance}_{\text{computer B}}} = \frac{\text{execution time}_{\text{computer B}}}{\text{execution time}_{\text{computer A}}}$$

For example, if computer A runs a program in 10 s and computer B runs the same program in 15 s, the relative performance of A with respect to B is 15/10 = 1.5.

**REFERENCE MACHINE**

- A previous machine, the same machine without an improvement, or a competitor's machine
- Also called the ***baseline machine***
- Used as the basis for a measure of relative performance

$$\text{Speedup Ratio} = \frac{\text{execution time on reference machine}}{\text{execution time}}$$

![](media/image18.png)

- **CLOCK RATE** - The most obvious indicator of a computer's performance
  - The speed at which fundamental operations are carried out within the computer
  - Also considered the worst metric by which to judge a computer
  - A poor indicator of performance because there is no simple relationship between clock rate and system performance across different platforms

**MILLIONS OF INSTRUCTIONS PER SECOND (MIPS)**

- Removes the discrepancy between systems with different numbers of clocks per operation by measuring instructions per second rather than clocks per second
- Tells only how fast a computer executes instructions, but does not tell what is actually achieved by the instructions being executed

$$\text{MIPS} = \frac{n}{t_{\text{execute}} \times 10^{6}}$$

**Where:** $n$ is the number of instructions executed and $t_{\text{execute}}$ is the time taken to execute them

![](media/image20.png)

![](media/image22.png)

**ASSEMBLY LANGUAGE PROGRAMMING**

![](media/image24.png)

**MACHINE LANGUAGE**

- Native to a processor: executed directly by hardware
- Instructions consist of binary code: 1s and 0s

**ASSEMBLY LANGUAGE**

- A programming language that uses symbolic names to represent operations, registers, and memory locations
- A slightly higher-level language
- Readability of instructions is better than machine language
- One-to-one correspondence with machine language instructions

**ASSEMBLERS**

- Translate assembly language to machine code

**COMPILERS**

- Translate high-level programs to machine code, either directly or indirectly via an assembler

**INSTRUCTION**

- Each command of a program that instructs the computer what to do
- Must be in binary format

**INSTRUCTION SET**

- The set of all instructions in binary form that makes up the computer's machine language

**MACHINE LANGUAGE INSTRUCTIONS**

- Usually made up of several fields that specify different information for the computer

**TWO MAJOR FIELDS OF MACHINE LANGUAGE INSTRUCTIONS**

- **OPCODE FIELD** - Stands for ***operation code***; specifies the particular operation that is to be performed; each operation has its unique opcode
- **OPERANDS FIELD** - Specifies where to get the source and destination operands for the operation specified by the opcode
- The source/destination of an operand can be a constant, memory, or one of the general-purpose registers

**ASSEMBLY VS. MACHINE CODE**

![](media/image26.png)

**MAPPING BETWEEN ASSEMBLY LANGUAGE AND HIGH-LEVEL LANGUAGE**

- Translating HLL programs to machine language programs is not a one-to-one mapping
- An HLL instruction (usually called a statement) will be translated to one or more machine language instructions

![A screen shot of a computer program](media/image28.png)

**ADVANTAGES OF HIGH-LEVEL LANGUAGES**

- Program development is faster
- Program maintenance is easier
- Programs are portable

**WHY LEARN ASSEMBLY LANGUAGE?**

- **Accessibility to system hardware** - Assembly language is useful for implementing system software, and also for small embedded system applications
- **Space and time efficiency** - Understanding sources of program inefficiency, tuning program performance, and writing compact code
- **Writing assembly programs gives the computer designer the needed deep understanding of the instruction set and how to design one**
- **To be able to write compilers for HLLs, we need to be experts in the machine language; assembly programming provides this experience**

**ASSEMBLY VS. HIGH-LEVEL LANGUAGE**

**ASSEMBLER**

- A program that converts source code programs written in assembly language into object files in machine language

**POPULAR ASSEMBLERS FOR THE INTEL FAMILY OF PROCESSORS**

- **TASM** (Turbo Assembler, from Borland)
- **NASM** (Netwide Assembler, for both Windows and Linux)
- **GNU** assembler (distributed by the Free Software Foundation)

**LINKER PROGRAM**

- Needed to produce executable files
- Combines your program's object file created by the assembler with other object files and link libraries, and produces a single executable program
- A linker program is provided with the MASM distribution for linking 32-bit programs

**IRVINE32.LIB**

- A link library for input and output developed by ***KIP IRVINE***
- Works in Win32 console mode under MS-Windows

**ASSEMBLE AND LINK PROCESS**

- A project may consist of multiple source files
- The assembler translates each source file separately into an object file
- The linker links all object files together with link libraries

![](media/image30.png)

**DEBUGGER**

- Allows you to trace the execution of a program
- Allows you to view code, memory, registers, etc.
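The assemble-and-link flow described above can be sketched with a toy example in Python (purely illustrative: the three-instruction "ISA", its opcode values, and the function names are invented for this sketch, not any real assembler's format):

```python
# Toy opcode table for an invented 3-instruction machine (illustrative only).
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}

def assemble(source_lines):
    """'Assembler': translate one source file's symbolic lines into an
    object file (here, simply a list of (opcode, operand) pairs)."""
    obj = []
    for line in source_lines:
        mnemonic, operand = line.split()
        obj.append((OPCODES[mnemonic], int(operand)))
    return obj

def link(object_files):
    """'Linker': combine separately assembled object files into one
    executable image."""
    image = []
    for obj in object_files:
        image.extend(obj)  # real linkers also relocate addresses and resolve symbols
    return image

# Each source file is assembled separately, then everything is linked together.
obj_a = assemble(["LOAD 10", "ADD 20"])
obj_b = assemble(["STORE 30"])
executable = link([obj_a, obj_b])
print(executable)  # [(1, 10), (2, 20), (3, 30)]
```

A real toolchain does far more (symbol resolution, relocation, library search), but the division of labor is the same: the assembler handles one source file at a time, and the linker merges the resulting object files into a single executable.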
**EDITOR**

- Allows you to create assembly language source files
- Some editors provide syntax highlighting features and can be customized as a programming environment

**INSTRUCTION SET ARCHITECTURE**

- The collection of assembly/machine instructions of the machine
- The machine resources that can be managed with these instructions:
  - Memory
  - Programmer-accessible registers
- Provides a hardware/software interface

![](media/image32.png)

![](media/image34.png)

**COMPUTER ARITHMETIC**

**BINARY OVER DECIMAL**

**INFORMATION**

- Handled in a computer by electronic/electrical components that operate in binary mode
- A component can only indicate 2 states (on \[1\] or off \[0\])
- Binary has only 2 digits (0 and 1), and is suitable for expressing 2 possible states
- Computer circuits only have to handle 2 binary digits rather than ten decimal digits, resulting in:
  - Simpler internal circuit design
  - Less expensive circuits
  - More reliable circuits

**BINARY ARITHMETIC**

- As simple to learn as the binary number system itself

**BINARY ADDITION**

![](media/image36.png)

![](media/image38.png)

**BINARY SUBTRACTION**

![](media/image40.png)

**COMPLEMENT OF A NUMBER**

![](media/image42.png)

**COMPLEMENT OF A BINARY NUMBER**

- Obtained by changing all its 0's to 1's and all its 1's to 0's

![](media/image44.png)

**COMPLEMENTARY METHOD OF SUBTRACTION**

- An additive approach to subtraction

**STEP 1:** Find the complement of the number you are subtracting (the subtrahend).

**STEP 2:** Add this to the number from which you are taking away (the minuend).

**STEP 3:** If there is a carry of 1, add it to obtain the result; if there is no carry, recomplement the sum and attach a negative sign.
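The three steps of the complementary method can be sketched in Python (a minimal illustration using one's-complement arithmetic on fixed-width binary strings; the function name and string representation are my own, not from the notes):

```python
def ones_complement_subtract(minuend: str, subtrahend: str) -> str:
    """Subtract two binary strings using the complementary (additive) method."""
    width = max(len(minuend), len(subtrahend))
    m = int(minuend, 2)
    s = int(subtrahend, 2)
    # Step 1: complement of the subtrahend (flip every bit).
    comp = (~s) & ((1 << width) - 1)
    # Step 2: add the complement to the minuend.
    total = m + comp
    carry = total >> width
    total &= (1 << width) - 1
    if carry:
        # Step 3a: there is a carry -> add it back (end-around carry).
        return format(total + carry, f"0{width}b")
    # Step 3b: no carry -> recomplement the sum and attach a negative sign.
    return "-" + format((~total) & ((1 << width) - 1), f"0{width}b")

print(ones_complement_subtract("1010", "0011"))  # 1010 - 0011 -> 0111 (10 - 3 = 7)
print(ones_complement_subtract("0011", "1010"))  # 0011 - 1010 -> -0111 (3 - 10 = -7)
```

Note how both cases of Step 3 appear: subtracting the smaller number from the larger produces an end-around carry, while the reverse produces no carry and yields a recomplemented, negative result.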
![A white sheet with black text](media/image46.png)

**BINARY SUBTRACTION USING THE COMPLEMENTARY METHOD**

![](media/image48.png)

**BINARY MULTIPLICATION**

![](media/image50.png)

**BINARY DIVISION**

![](media/image52.png)

**RULES FOR BINARY DIVISION**

1. Start from the left of the dividend.
2. Perform a series of subtractions in which the divisor is subtracted from the dividend.
3. If subtraction is possible, put a 1 in the quotient and subtract the divisor from the corresponding digits of the dividend.
4. If subtraction is not possible (divisor greater than remainder), record a 0 in the quotient.
5. Bring down the next digit to add to the remainder digits. Proceed as before, in a manner similar to long division.

**ADDITIVE METHOD OF MULTIPLICATION AND DIVISION**

- Most computers use additive methods for performing multiplication and division operations because this simplifies the internal circuit design of computer systems

![](media/image54.png)

**RULES FOR THE ADDITIVE METHOD OF DIVISION**

- Subtract the divisor repeatedly from the dividend until the result of subtraction becomes less than or equal to zero.
- If the result of subtraction is zero, then:
  - quotient = total number of times subtraction was performed
  - remainder = 0
- If the result of subtraction is less than zero, then:
  - quotient = total number of times subtraction was performed, minus 1
  - remainder = result of the subtraction previous to the last subtraction

**PROCESSOR ORGANIZATION**

- Refers to the way a given instruction set architecture is implemented in a particular processor
- Determines how the CPU processes instructions, manages data flow, and interfaces with memory and input/output devices

**PROCESSOR**

- Also referred to as the CPU
- A sophisticated machine for processing data
- Executes its native instructions over data

**CLASSIFICATION OF PROCESSORS**

1. **VON NEUMANN ARCHITECTURE**
   - Designed by ***John von Neumann*** and his colleagues in 1946
   - Stores program and data in the same memory
   - Designed to overcome a limitation of the earlier ENIAC computer: the task of entering and altering programs for the ENIAC was extremely tedious

![](media/image56.png)

2. **HARVARD ARCHITECTURE**
   - Physically separate storage and signal pathways for instructions and data
   - Originated from the ***Harvard Mark I relay-based computer***, which stored:
     - Instructions on punched tape (24 bits wide)
     - Data in electro-mechanical counters
   - In some systems, instructions can be stored in read-only memory while data memory generally requires read-write memory
   - In some systems, there is much more instruction memory than data memory
   - Used in the MCS-51, MIPS, etc.

**COMPONENTS OF A PROCESSOR**

1. **PROCESSOR CONTROL UNIT (CU)** - Controls all activities that go on within the device
2.
**ARITHMETIC LOGIC UNIT (ALU)** - Where all calculation and logic operations happen
   - Made up of arithmetic and logic units
   - Handles basic arithmetic operations such as addition, subtraction, multiplication, and division
   - Caters to logic operations such as AND, OR, NOT, NOR
3. **MAIN MEMORY (RAM)** - Counted as part of the CPU, but not integrated within the microchip that carries the other components
   - Used to temporarily store data that is required for processing within the CPU
4. **CENTRAL PROCESSING UNIT REGISTERS** - The very first temporary memories, integrated within the processor

**REGISTERS USED FOR DIFFERENT FUNCTIONS WITHIN THE PROCESSOR**

- **PROGRAM COUNTER (PC)** - Holds the address of the next instruction to be fetched
- **MEMORY ADDRESS REGISTER (MAR)** - Holds the address of the memory location to be read from or written to
- **MEMORY DATA REGISTER (MDR)** - Contains the data that is to be written to or read from the addressed location
- **CURRENT INSTRUCTION REGISTER (IR)** - Stores the instruction currently being executed
- **ACCUMULATOR REGISTER (AR)** - Used to store intermediate results

5. **COMPUTER BUSES** - Pathways that data travels along when the CPU is communicating with RAM and other I/O devices

**3 TYPES OF COMPUTER BUSES**

1. **ADDRESS BUS** - Carries the address of where data is going to and coming from in the RAM
2. **CONTROL BUS** - Carries the control signals that are used to manage the communication between devices
3. **DATA BUS** - Carries the actual data that is being transmitted from one location to another

6. **CACHE MEMORY** - Used to store data that is frequently used by the CPU
   - Increases the access speed of data, since it is faster than the RAM

**2 TYPES OF CACHE MEMORY**

1. **PRIMARY CACHE** - Mostly integrated within the processor
2. **SECONDARY CACHE** - Can be either within or outside the processor

7. **CPU CLOCK** - An electronic pulse that determines the number of cycles per second at which a CPU executes instructions
- The cycles are measured in ***hertz***
- Current computer processors run at speeds measured in ***gigahertz*** (billions of cycles per second)

**INTERNAL INSTRUCTIONS OF A CPU**

- The micro-operations that dictate how data is processed within the processor
- **DATA TRANSFER INSTRUCTIONS** - Move data between registers, memory, and I/O devices
- **ARITHMETIC INSTRUCTIONS** - Perform basic arithmetic operations on data
- **LOGIC INSTRUCTIONS** - Execute logical operations on binary data
- **CONTROL INSTRUCTIONS** - Manage the sequence of operations, including jumps, branches, and calls
- **SHIFT INSTRUCTIONS** - Manipulate the bit positions within registers for operations like multiplication or division by powers of 2
- **STATUS REGISTER MANIPULATION** - Updates flags in the status register that record conditions such as zero, carry, overflow, or sign

**MEMORY SYSTEM ORGANIZATION AND ARCHITECTURE**

**MEMORY**

- By analogy with the faculty of the brain: where data or information is encoded, stored, and retrieved when needed

**TWO CATEGORIES OF MEMORY**

1. **VOLATILE MEMORY** - Loses data when power is switched off
2. **NON-VOLATILE MEMORY** - Permanent storage; does not lose any data when power is switched off

**MEMORY HIERARCHY SYSTEM**

- Consists of all storage devices employed in a computer system
- A pyramid structure is commonly used to illustrate the significant differences among memory types

**MEMORY ACCESS METHODS**

- **RANDOM ACCESS** - The ability to access a memory location directly and in a constant amount of time, regardless of the position of the data in memory
- **SEQUENTIAL ACCESS** - Allows data to be read or written in a sequence, i.e., in order
- **DIRECT ACCESS** - A technique where data can be accessed from a storage device without needing to sequentially traverse other data
- Each memory location or storage unit has a unique address, allowing the system to retrieve or store data directly at that location

**MAIN MEMORY**

- The memory unit that communicates directly with the CPU
- The central storage unit of the computer system
- A large and fast memory used to store data during computer operations
- Made up of RAM and ROM

**RANDOM ACCESS MEMORY (RAM)**

- A type of computer memory that allows data to be read from and written to in any order, giving it its "random-access" capability
- Loses its stored data when the system is powered off

**READ-ONLY MEMORY (ROM)**

- Used for storing programs that are permanently resident in the computer
- Needed for storing an initial program called the ***bootstrap loader***, which starts the computer software when power is turned on

**RAM TYPES**

- **STATIC RAM (SRAM)**
  - Each cell stores a bit with a six-transistor circuit
  - Retains its value indefinitely, as long as it is kept powered
  - Relatively insensitive to disturbances such as electrical noise
  - Faster and more expensive than DRAM
- **DYNAMIC RAM (DRAM)**
  - Each cell stores a bit with a capacitor and a transistor
  - The value must be refreshed every 10-100 ms
  - Sensitive to disturbances
  - Slower and cheaper than SRAM

**ROM TYPES**

- **MASKED ROM** - Programmed with its data when the chip is fabricated
- **PROM** - Programmable ROM, programmed by the user with a standard PROM programmer by burning some special type of fuses; once programmed, it cannot be programmed again
- **EPROM** - Erasable PROM; the chip can be erased and reprogrammed; the programming process consists of charging some internal capacitors, and UV light (the erase method) makes those capacitors leak their charge, thus resetting the chip
- **EEPROM** - Electrically Erasable PROM; it is possible to modify individual locations of the memory, leaving others unchanged; one common use of the EEPROM is the BIOS of personal computers
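The random-access behavior of main memory described above — any location readable or writable directly, in any order, by its unique address — can be sketched as a toy model in Python (the class and method names are invented for illustration):

```python
class MainMemory:
    """Toy byte-addressable memory: constant-time access by address."""

    def __init__(self, size: int) -> None:
        self.cells = bytearray(size)  # every cell starts as 0

    def store(self, address: int, value: int) -> None:
        self.cells[address] = value & 0xFF  # one byte per cell

    def load(self, address: int) -> int:
        return self.cells[address]

ram = MainMemory(1024)
ram.store(0x010, 0xAB)   # write anywhere, in any order...
ram.store(0x3FF, 0x42)
print(hex(ram.load(0x010)))  # ...and read back directly: 0xab
```

Contrast this with sequential access (e.g., tape), where reaching an address would require traversing every cell before it.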
**AUXILIARY MEMORY** - The bulk of information is stored in the auxiliary memory - Also called backing storage or secondary storage **AUXILIARY MEMORY DEVICES** - Memory Card Reader - USB Flash Memory - Media Devices - External Optical Drive **CACHE MEMORY** - The data or contents of the main memory that are used again and again by the CPU are stored in the cache memory so that they can be accessed in a shorter time - Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in cache memory, the CPU moves on to the main memory. It also transfers a block of recent data into the cache, evicting old data in the cache to accommodate the new **L1 CACHE (2 KB -- 64 KB)** - Also known as primary cache or level 1 cache - The topmost cache in the hierarchy of cache levels of a CPU - The fastest cache - Has a smaller size and smaller delay **L2 CACHE (256 KB -- 512 KB)** - Next to L1 in the hierarchy - Accessed only if the data being looked for is not found in L1 - Typically implemented using SRAM **L3 CACHE (1 MB -- 8 MB)** - The largest among all the caches; even though it is slower than L1 and L2, it is still faster than RAM **ASSOCIATIVE MEMORY** - Also known as ***Content Addressable Memory (CAM)*** - Enables data retrieval based on the content of the data rather than its specific address in memory - Allows for storage and retrieval of information by matching a given input with stored patterns or data **ASSOCIATIVE MEMORY IS ORGANIZED IN SUCH A WAY:** - **ARGUMENT REGISTER (A)** - Contains the word to be searched - Has n bits (one for each bit of the word) - **KEY REGISTER (K)** - Specifies which part of the argument word needs to be compared with words in memory - If all bits in the register are 1, the entire word is compared - Otherwise, only the bit positions whose K bit is set to 1 are compared - **ASSOCIATIVE MEMORY ARRAY** - Contains the words which are to be compared with the argument word - **MATCH REGISTER (M)** - It has m
bits, one bit corresponding to each word in the memory array - After the comparison, the match-register bits of the matching words are set to 1 **INPUT/OUTPUT INTERFACING AND COMMUNICATION** - Refers to the interaction between a system and the external environment, typically involving the exchange of data between internal components of a device and the outside world **I/O PROBLEMS** - **Wide variety of peripherals** - **Need I/O modules to act as a bridge between the processor/memory bus and the peripherals** **EXTERNAL DEVICES** - A device that provides a means of exchanging data between the external environment and the computer - Attaches to the computer by a link to an I/O module - Also referred to as a ***peripheral device/peripheral*** **3 CATEGORIES OF EXTERNAL DEVICES** 1. **HUMAN READABLE** - Communicate with the computer user - Monitor, printer, keyboard, mouse 2. **MACHINE READABLE** - Communicate with equipment - Hard drive, CD-ROM, sensors, actuators 3.
**COMMUNICATION** - Communicate with remote devices - Modem, network interface card **EXTERNAL DEVICE INTERFACE** - **CONTROL SIGNAL** - Determines the function that the device will perform (READ/WRITE) - **DATA** - In the form of a set of bits to be sent to or received from the I/O module - **STATUS SIGNAL** - Indicates the state of the device (READY, NOT READY) - **CONTROL LOGIC** - Associated with the device; controls the device operation in response to direction from the I/O module - **TRANSDUCER** - Converts data from electrical to other forms of energy during output and from other forms to electrical during input - **BUFFER** - Temporarily holds data being transferred **I/O MODULE** - Hardware that connects the CPU with external devices - Acts as a translator, allowing different devices to communicate with the computer system **MAJOR FUNCTIONS OF AN I/O MODULE** - **CONTROL AND TIMING** - Coordinates the flow of traffic between internal resources and external devices - **CPU COMMUNICATION** - Communicates with the processor in terms of accepting commands from the processor, exchanging data, status reporting, and address recognition - **DEVICE COMMUNICATION** - Communicates with external devices - **DATA BUFFERING** - Temporarily holds data being transferred between the I/O module and external devices - **ERROR DETECTION** - Detects errors and reports them to the processor **PROGRAMMED I/O** - One of the I/O techniques, the others being interrupt-driven I/O and direct memory access (DMA) - The simplest I/O technique for exchanging data between the processor and external devices
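The programmed I/O technique just described amounts to a polling loop: the processor issues a command, then repeatedly checks the device's status register until the operation completes. The `Device` class below is a hypothetical stand-in for a peripheral's registers (real hardware would expose them as memory-mapped or port-mapped locations); it is a sketch of the idea, not a driver.

```python
READY, BUSY = 1, 0

class Device:
    """Hypothetical peripheral with a status register and a data register."""

    def __init__(self, data):
        self._pending = list(data)       # bytes the device will eventually deliver
        self.status = BUSY
        self.data_register = None

    def issue_read(self):
        self.status = BUSY               # command accepted; device starts working

    def tick(self):
        # Models the device finishing its work in the background.
        if self._pending:
            self.data_register = self._pending.pop(0)
            self.status = READY

def programmed_io_read(device, count):
    """Processor-side polling loop: busy-wait on status, then grab the data."""
    result = []
    for _ in range(count):
        device.issue_read()              # 1. issue command to the I/O module
        while device.status != READY:    # 2. poll the status register
            device.tick()                # (in hardware, the device runs on its own)
        result.append(device.data_register)   # 3. transfer the data
    return result

print(programmed_io_read(Device([72, 105]), 2))   # → [72, 105]
```

The busy-wait is exactly the inefficiency that motivates interrupt-driven I/O: the processor can do nothing else while it spins on the status register.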
**PROGRAMS EXECUTED BY THE PROCESSOR TO CONTROL I/O OPERATION** - Sensing device status - Sending a read or write command - Transferring data **OPERATION OF PROGRAMMED I/O** - The processor is executing a program and encounters an instruction relating to an I/O operation - The processor then executes that instruction by issuing a command to the appropriate I/O module - The I/O module performs the requested action based on the I/O command issued by the processor (READ/WRITE) and sets the appropriate bits in the I/O status register - The processor periodically checks the status of the I/O module until it finds that the operation is complete **INTERRUPT-DRIVEN I/O** - A method of managing I/O activity where a peripheral device signals the need for data transfer, triggering a program interrupt - The processor responds by entering an interrupt service routine, which handles the request based on the system's interrupt levels and priorities - While this technique demands more complex hardware and software, it greatly improves the efficiency of the computer's time and resources **FOR INPUT** - The device interrupts the CPU when new data has arrived and is ready to be retrieved by the system processor. The actual actions to perform depend on whether the device uses I/O ports or memory mapping **FOR OUTPUT** - The device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer **BASIC OPERATIONS OF INTERRUPT** 1. CPU issues read command 2. I/O module gets data from peripheral whilst CPU does other work 3. I/O module interrupts CPU; CPU requests data 4.
I/O module transfers data **ADVANTAGES** - Fast - Efficient **DISADVANTAGES** - Can be tricky to write if using a low-level language - Can be tough to get the various pieces to work well together - Usually done by the hardware manufacturer/OS maker, e.g. Microsoft **2 MAIN PROBLEMS OF INTERRUPT I/O** 1. There are multiple I/O modules; how should the processor determine which device issued the interrupt signal? 2. How does the processor decide which module to process when multiple interrupts have occurred? **4 MAIN WAYS TO COUNTER THE PROBLEMS** 1. Multiple Interrupt Lines 2. Software Poll 3. Daisy Chain (Hardware Poll, Vectored) 4. Bus Arbitration (Vectored) **DIRECT MEMORY ACCESS AND CACHE MEMORY** **CACHE MEMORY** - A small, high-speed RAM buffer located between the CPU and main memory - Accelerates the computer while keeping the price low **ADVANTAGES OF CACHE MEMORY** - It is faster than main memory - It creates a way for fast data transfers, so it consumes less access time compared to main memory - It stores frequently accessed data that can be fetched within a short period of time **DISADVANTAGES OF CACHE MEMORY** - It has limited capacity - It is very expensive compared to RAM and hard disk **DIRECT MEMORY ACCESS (DMA)** - A feature of computerized systems that allows certain hardware subsystems to access main system memory independently of the CPU - Provides data access to the memory while the microprocessor is temporarily disabled - Both the CPU and the DMA controller have access to main memory via a shared system bus having data, address, and control lines - Sometimes referred to as a channel - A special interface circuit that connects I/O devices to the system bus - May be incorporated directly into the I/O device **DMA TRANSFER** - Used to do high-speed memory-to-memory transfers - Can be done in two ways
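The cache behavior described earlier (check the cache first, fall back to main memory on a miss, install the fetched block and evict the old occupant) can be sketched as a direct-mapped cache simulation. The class name, line count, and toy main memory below are assumptions for illustration, not any particular processor's design.

```python
class DirectMappedCache:
    """Toy direct-mapped cache: each address maps to exactly one cache line."""

    def __init__(self, num_lines, main_memory):
        self.num_lines = num_lines
        self.lines = [None] * num_lines          # each line holds a (tag, value) pair
        self.memory = main_memory
        self.hits = self.misses = 0

    def read(self, address):
        index = address % self.num_lines         # which cache line this address maps to
        tag = address // self.num_lines          # identifies which block occupies the line
        line = self.lines[index]
        if line is not None and line[0] == tag:
            self.hits += 1                       # found in cache: fast path
            return line[1]
        self.misses += 1                         # not in cache: go to main memory
        value = self.memory[address]
        self.lines[index] = (tag, value)         # install block, evicting old occupant
        return value

memory = {addr: addr * 2 for addr in range(64)}  # toy main memory: address -> value
cache = DirectMappedCache(8, memory)
cache.read(5)    # miss: fetched from memory, installed in line 5
cache.read(5)    # hit: served from the cache
cache.read(13)   # miss: 13 maps to the same line as 5, so it evicts that block
```

The third read shows the eviction the notes mention: with only one candidate line per address, two addresses that share a line keep displacing each other, which is why real designs add associativity.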