Computer Organization and Architecture Lecture Notes Fall 2024 - Mansoura University

Summary

These lecture notes cover Computer Organization and Architecture, specifically focusing on basic concepts. The document encompasses the topics of computer arithmetic, computer architecture (instruction set: CPU), computer organization, and parallel organization.

Full Transcript


Mansoura University
Faculty of Computers and Information, Department of Computer Science
Fall 2024
Computer Organization and Architecture
Lecture 1: Basic Concepts
Programs: AI - L100, MI & SE - L200
Dr. Muhammad Haggag Zayyan, CS Department

COURSE AGENDA
- Part 1: Basic concepts - Computer Arithmetic - Computer Architecture (Instruction set: CPU)
- Part 2: Computer Organization: Computer system (Memory) - I/O Modules
- Part 3: Parallel Organization - Computer Performance

LECTURE TOPICS
- Computer Architecture and Computer Organization
- Structure and Function
- Hardware description
- Computer History: the IAS computer

TRANSFORMATION LEVELS FROM PROBLEM TO PHYSICS

COMPUTER ARCHITECTURE AND COMPUTER ORGANIZATION
- Computer Architecture: a functional description of the requirements and design implementation for the various parts of a computer. It deals with the functional behavior of the computer system, and it comes before the computer organization when designing a computer. (Architecture describes what the computer does.)
- Computer Organization: decided after the computer architecture. Computer organization is how operational attributes are linked together to realize the architectural specification. It deals with structural relationships. (Organization describes how the computer does it.)

COMPUTER ARCHITECTURE VS COMPUTER ORGANIZATION
- Architecture describes what the computer does; organization describes how it does it.
- Architecture deals with the functional behavior of the computer system; organization deals with structural relationships.
- Architecture acts as the interface between hardware and software; organization deals with the components and their connections in the system.
- Architecture indicates the hardware; organization indicates the performance.
- When designing a computer, the architecture is fixed first; the organization is decided after the architecture.
- Architecture involves logic (instruction sets, addressing modes, data types, cache optimization); organization involves physical components (circuit design, adders, signals, peripherals).
- Intel and AMD use the same architecture but different organizations of that architecture.

ARCHITECTURE / ORGANIZATION EXAMPLE
Reference for instruction clock counts: http://www.penguin.cz/~literakl/intel/
Architecture: design a computer that performs the multiplication operation.
Organization (physical components): a multiplication circuit or a shift circuit.
1. Using a multiplication circuit: 5*8 = 40
2. Using a shift-register circuit:
   5*8 = 5*2^3 = 5 shifted left by 3
   5*9 = 5*(8+1) = 5*(2^3 + 2^0) = (5 shifted left by 3) + 5 = 40 + 5 = 45
- For small shift counts, shift/add circuits are faster (higher speed) than a multiplication circuit.
- A compiler can convert multiplication operations into shift/add operations.

STRUCTURE AND FUNCTION
- Structure: the way in which the components are interrelated.
- Function: the operation of each individual component as part of the structure.

FUNCTION
Basic functions that a computer can perform:
- Data processing: data may take a wide variety of forms, and the range of processing requirements is broad. Examples: indexing of large datasets created by Web crawler engines; image processing such as image conversion (e.g., enlarging an image or creating thumbnails), as well as compressing or encrypting images.
- Data storage: the computer must temporarily store at least those pieces of data that are being worked on at any given moment. Equally important, the computer performs a long-term data storage function (files of data).
- Data movement: when data are received from or delivered to a device that is directly connected to the computer, the process is known as input-output (I/O), and the device is referred to as a peripheral.
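The shift/add decomposition in the 5*8 and 5*9 examples above can be sketched in software (Python here purely as an illustration; the lecture is about the hardware circuits that do the same thing):

```python
# Sketch: multiplying by a constant using only shifts and adds, the way
# a shift/add circuit decomposes the multiplier into powers of two.

def shift_add_multiply(x, k):
    """Multiply x by non-negative k using only shifts and adds.

    Each set bit i of k contributes (x << i), i.e. x * 2^i."""
    result = 0
    i = 0
    while k:
        if k & 1:              # bit i of k is set -> add x * 2^i
            result += x << i
        k >>= 1
        i += 1
    return result

print(shift_add_multiply(5, 8))  # 5*2^3: a single shift -> 40
print(shift_add_multiply(5, 9))  # 5*(2^3 + 2^0): shift + add -> 45
```

For multipliers with few set bits (few shift moves), this loop does very little work, which mirrors why shift/add hardware can beat a general multiplication circuit for such operands.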
  When data are moved over longer distances, to or from a remote device, the process is known as data communications.
- Control: within the computer, a control unit manages the computer's resources and orchestrates the performance of its functional parts in response to instructions.

STRUCTURE
Traditional single-processor computer:
- Central processing unit (CPU): controls the operation of the computer and performs its data processing functions; often simply referred to as the processor.
- Main memory: stores data.
- I/O: moves data between the computer and its external environment.
- System interconnection: provides for communication among CPU, main memory, and I/O. A common example of system interconnection is a system bus.
The CPU consists of:
- Control unit: controls the operation of the CPU and hence the computer.
- Arithmetic and logic unit (ALU): performs the computer's data processing functions.
- Registers: provide storage internal to the CPU.
- CPU interconnection: some mechanism that provides for communication among the control unit, ALU, and registers.

SIMPLE SINGLE-PROCESSOR COMPUTER: TOP-LEVEL STRUCTURE

CONTROL MEMORY
Two memory units:
- Main memory: stores instructions and data (user-modifiable).
- Control memory: stores the microprogram (system/ROM), which informs the processor how to execute each instruction. It is found in the control unit.
Each instruction can be broken down into microoperations, which involve:
- Registers: temporary storage for data being processed.
- Arithmetic logic unit (ALU): executes arithmetic and logical operations based on control signals.
- Data path: the pathways through which data moves within the CPU, controlled by signals from the control memory.

HARDWARE DESCRIPTION
Register Transfer Notation (RTN):
- RTN is used to specify the microoperations [microarchitecture design] that a CPU performs at the register level.
- It helps designers describe how data move between registers and how arithmetic or logical operations are carried out.
- It also describes conditional information in the system that causes operations to come about.
- It is a "shorthand" notation for microoperations.
- The possible locations in which transfer of information occurs are a memory location or a processor register.
- The general format of an RTN statement: Conditional information: Action1, Action2

MORE EXAMPLES
- Arithmetic operations (addition, subtraction, ...) are allowed, such as: R3 <- R1 + R2
- Bit-range transfers: R2(L) <- R1(0-7) means: transfer the contents of bits 0 to 7 of register R1 into register R2 at location L.
- Logic operations (AND, OR, XOR) are allowed, such as: R1 <- R1 XOR R2
- Shift operations are allowed.

MULTICORE COMPUTER
- A multicore computer has multiple processors. When these processors all reside on a single chip, the term multicore computer is used, and each processing unit (consisting of a control unit, ALU, registers, and perhaps cache) is called a core.

SIMPLIFIED VIEW OF MAJOR ELEMENTS OF A MULTICORE COMPUTER
- A prominent feature of contemporary computers is the use of multiple layers of memory, called cache memory, between the processor and main memory. A cache memory is smaller and faster than main memory and is used to speed up memory access by placing in the cache data from main memory that are likely to be used in the near future.
- A greater performance improvement may be obtained by using multiple levels of cache, with level 1 (L1) closest to the core and additional levels (L2, L3, and so on) progressively farther from the core. In this scheme, level n is smaller and faster than level n + 1.

COMPUTER HISTORY - THE FIRST GENERATION: VACUUM TUBES
The IAS computer:
- In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Princeton Institute for Advanced Studies.
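The RTN statements above can be mimicked in software to make their effect concrete. In this sketch the register names, widths, and the control condition P are all illustrative assumptions, with plain Python integers standing in for 8-bit registers:

```python
# Sketch: interpreting a few RTN-style microoperations with Python
# integers as 8-bit registers. Names and values are illustrative only.

R1, R2, R3 = 0xAB, 0xCD, 0x00
P = True  # an assumed control condition, as in "P: R3 <- R1 + R2"

# P: R3 <- R1 + R2   (conditional arithmetic transfer)
if P:
    R3 = (R1 + R2) & 0xFF   # mask keeps the result inside 8 bits

# R1 <- R1 XOR R2    (logic microoperation)
R1 = R1 ^ R2

# R2 <- R1(0-7)      (transfer bits 0..7 of R1 into R2)
R2 = R1 & 0xFF

print(hex(R3), hex(R1), hex(R2))
```

The point is only that each RTN statement names a destination register, a source expression, and optionally a condition that gates the transfer.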
  The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
- With rare exceptions, all of today's computers have this same general structure and function and are thus referred to as von Neumann machines.

IAS COMPUTER (OPERATION)
IAS memory formats:
- The memory of the IAS consists of 4,096 storage locations, called words, of 40 binary digits (bits) each. Both data and instructions are stored there.
- Word formats: each number is represented by a sign bit and a 39-bit value. A word may alternatively contain two 20-bit instructions.
- Each instruction consists of an 8-bit operation code (opcode) and a 12-bit address designating one of the words in memory.

IAS CONTROL UNIT
The control unit operates the IAS by fetching instructions from memory and executing them one at a time. The IAS structure figure reveals that both the control unit and the ALU contain storage locations, called registers, defined as follows:
- Memory buffer register (MBR): contains a word to be stored in memory or sent to the I/O unit, or is used to receive a word from memory or from the I/O unit.
- Memory address register (MAR): specifies the address in memory of the word to be written from or read into the MBR.
- Instruction register (IR): contains the 8-bit opcode of the instruction being executed.
- Instruction buffer register (IBR): employed to hold temporarily the right-hand instruction from a word in memory.
- Program counter (PC): contains the address of the next instruction pair to be fetched from memory.
- Accumulator (AC) and multiplier quotient (MQ): employed to hold temporarily operands and results of ALU operations.
- The IAS operates by repetitively performing an instruction cycle consisting of fetch and execute steps.
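The fetch-execute cycle described above can be sketched for two IAS opcodes, LOAD M(X) (opcode 01, AC <- M(X)) and STOR M(X) (opcode 21, M(X) <- AC). This is a deliberate simplification: it fetches one pre-decoded instruction per step and ignores the two-instructions-per-word packing and the IBR path:

```python
# Minimal sketch of the IAS fetch-execute cycle for LOAD M(X) (0x01)
# and STOR M(X) (0x21). Instructions are stored pre-decoded as
# (opcode, address) pairs; the real IAS packs two 20-bit instructions
# into each 40-bit word and uses the IBR for the right-hand one.

memory = {0x08A: (0x01, 0x0FA),   # LOAD M(0FA):  AC <- M(0FA)
          0x08B: (0x21, 0x0FB),   # STOR M(0FB):  M(0FB) <- AC
          0x0FA: 42}

PC, AC = 0x08A, 0
for _ in range(2):                # run the two-instruction program
    opcode, address = memory[PC]  # fetch and decode (IR, MAR loaded)
    PC += 1                       # advance to the next instruction
    if opcode == 0x01:            # LOAD M(X)
        AC = memory[address]
    elif opcode == 0x21:          # STOR M(X)
        memory[address] = AC

print(memory[0x0FB])  # M(0FA) copied into M(0FB) -> 42
```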
IAS INSTRUCTION SET
- Convert the following IAS binary instructions to assembly language.
- Answer:
  01 0FA
  21 0FB
  This program stores the value of the content at memory location 0FA into memory location 0FB.

IAS - LAB
Given the memory contents of the IAS computer shown below, show the assembly language code of the program, starting at address 08A, and explain what this program does.
  Address | Contents
  08A     | 010FA210FB
  08B     | 010FA0F08D
  08C     | 020FA210FB
Solution: this program stores the absolute value of the content at memory location 0FA into memory location 0FB.

THE SECOND GENERATION: TRANSISTORS
- The transistor, which is smaller, cheaper, and generates less heat than a vacuum tube, can be used in the same way as a vacuum tube to construct computers.
- Computer generations: ULSI refers to the integration of billions of transistors and beyond on a single chip.

PHOTOLITHOGRAPHY
- Lithography (a Greek word) means printing done on stone.
- Photolithography is a process that transfers geometric shapes from a template onto a silicon surface using light. It is used in micro-manufacturing applications.
- CPUs are made using photolithography, where an image of the CPU is etched onto a piece of silicon. The exact method of how this is done is usually referred to as the process node and is measured by how small the manufacturer can make the transistors (nm = nanometer, 10^-9 m).
- Since smaller transistors are more power-efficient, they can do more calculations without getting too hot, which is usually the limiting factor for CPU performance.
- Smaller transistors also allow for smaller die sizes, which reduces costs, and can increase density at the same die size, which means more cores per chip. 7 nm is effectively twice as dense as the previous 14 nm node. A lower-nm transistor requires less power to work and does not produce as much heat (so it does not need bigger heatsinks).
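The conversion asked for in the instruction-set exercise above can be mechanized: split a 40-bit word (10 hex digits) into two 20-bit instructions, each an 8-bit opcode plus a 12-bit address. Only the opcodes appearing in the exercise and lab are named here; the mnemonics follow the standard IAS instruction-set listing:

```python
# Sketch: disassembling a 40-bit IAS word (10 hex digits) into its two
# 20-bit instructions (8-bit opcode + 12-bit address each).

MNEMONICS = {0x01: "LOAD M(X)",          # AC <- M(X)
             0x02: "LOAD -M(X)",         # AC <- -M(X)
             0x0F: "JUMP+ M(X,20:39)",   # if AC >= 0, go to right instr. of M(X)
             0x21: "STOR M(X)"}          # M(X) <- AC

def disassemble_word(hex_word):
    """Split a 10-hex-digit IAS word into (mnemonic, address) pairs."""
    word = int(hex_word, 16)
    left, right = word >> 20, word & 0xFFFFF      # two 20-bit halves
    result = []
    for instr in (left, right):
        opcode, address = instr >> 12, instr & 0xFFF
        result.append((MNEMONICS.get(opcode, f"op {opcode:02X}"), address))
    return result

for mnemonic, address in disassemble_word("010FA210FB"):
    print(f"{mnemonic:20s} X = {address:03X}")
```

Running this on the word 010FA210FB yields LOAD M(X) with X = 0FA followed by STOR M(X) with X = 0FB, matching the answer given in the exercise.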
- A node shrink isn't just about performance, though; it also has huge implications for low-power mobile and laptop chips. A smaller transistor size means less switching delay and therefore a faster response time.
- With 7 nm (compared to 14 nm), you could get 25% more performance under the same power, or the same performance for half the power. This means longer battery life at the same performance, and much more powerful chips for smaller devices, since you can effectively fit twice as much performance into the same limited power target.

MOORE'S LAW
Moore's Law is a prediction made by Gordon Moore, co-founder of Intel, in 1965. It states that the number of transistors on a microchip doubles approximately every two years, leading to an exponential increase in computing power and a decrease in relative cost.
Key points:
1. Transistor density: Moore observed that the density of transistors on integrated circuits was increasing rapidly, which allowed for more powerful and efficient chips.
2. Performance and cost: as transistor counts increase, so does the performance of computers, while the cost per transistor decreases, making technology more accessible.
3. Impact on technology: Moore's Law has driven advancements in various fields, including computing, telecommunications, and consumer electronics. It has influenced design choices, manufacturing processes, and overall industry expectations.
4. Challenges: as technology approaches physical and economic limits (e.g., heat dissipation, quantum effects), sustaining Moore's Law has become more challenging. While the doubling trend has continued for decades, there are debates about its future viability.
5. Current status: while the exact prediction may not hold in the same way it did in the past, the principles behind it continue to influence the semiconductor industry, leading to innovations in chip architecture, materials, and fabrication techniques.

https://shorturl.at/vFMB9

Thank You
