COMPUTER ARCHITECTURE AND ORGANIZATION
LECTURE 1

Computer architecture
Computer architecture refers to the design of a computer system, including the hardware components and their interconnections, the instruction set architecture (ISA), and the system's performance characteristics. Computer architecture focuses on the high-level design of the system, such as the overall organization of the processor, the memory hierarchy, the input/output system, and the communication channels between these components. It also deals with the design of the instruction set, which is the interface between the hardware and the software of the system.

Computer organization
Computer organization refers to the operational units and their interconnections that realize the architectural specifications. Computer organization deals with the low-level design of the system, such as the logic circuit design of the processor, the memory organization, the peripheral devices, and the data path of the computer. It also deals with the implementation of the instruction set architecture, including the design of the instruction decoder, the control unit, and the execution units.

Example
Assume you are a programmer writing a program, and suppose the program has the following features:
1- It loads data from memory.
2- It stores that data in integer variables.
3- It performs some calculations and then outputs the results on the computer screen.

Programming example
To write such a program, the programmer is concerned only with the details visible to them, that is, the computer architecture. The programmer does not care how the keywords they type are converted into machine instructions, nor about the underlying RAM technology used. For this reason we say that the organization of the components is invisible to the programmer. (A short code sketch of this program appears at the end of this section.)

In essence, computer architecture defines what the computer does, while computer organization defines how it does it. In other words, computer architecture is concerned with the big picture, while computer organization is concerned with the details of the system's design and implementation.

Examples of computer organization include:
Processor design: the logic circuit design of the processor, such as the arithmetic logic unit (ALU), the control unit, and the register file.
Memory organization: the design of the memory hierarchy, such as the cache memory, main memory, and secondary storage.
Input/output systems: the design of the interfaces between the computer and its peripheral devices, such as the keyboard, mouse, and display.

Components and functions of a computer
Central Processing Unit (CPU), made up of the Control Unit, the Arithmetic and Logic Unit, and the Registers.
Memory, made up of ROM (Read Only Memory), RAM (Random Access Memory), and secondary storage.
Input/Output units.
Self study: the function(s)/purpose(s) of each computer component.
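The three-step program described above can be written in a few lines of C. The sketch below is only illustrative: the small array standing in for data already held in memory, its contents, and the particular calculation are assumptions, not details from the lecture.

```c
/* Load data from memory, store it in integer variables,
   perform a calculation, and print the result on the screen. */
#include <stdio.h>

int main(void)
{
    int data[2] = {25, 17};      /* data assumed to already reside in memory */

    /* 1- load data from memory and 2- store it in integer variables */
    int a = data[0];
    int b = data[1];

    /* 3- perform some calculation and output the result on the screen */
    int sum = a + b;
    printf("sum = %d\n", sum);

    return 0;
}
```

Nothing in this source code says how each load is decoded into control signals or which RAM technology services it; those decisions belong to the computer organization, which remains invisible to the programmer.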
Strategies for different computer architectures
Computer architectures can be categorized in several ways:

(i) By the number of instructions executed per clock cycle
a) Von Neumann machines read and execute one instruction at a time. These include:
Complex Instruction Set Computer (CISC)
Reduced Instruction Set Computer (RISC)
Minimal Instruction Set Computer (MISC)
Transport Triggered Architecture (TTA)
Digital Signal Processor (DSP)
Others also include accumulator machines, register machines, and stack machines.
Other computers read and execute several instructions per clock cycle: Very Long Instruction Word (VLIW) and superscalar machines execute a minimum of two instructions per clock cycle.

(ii) By the connection between the CPU and main memory
These are Princeton machines. They use a unified memory, in which a single address corresponds to a single place in memory. That address can be used to read and write data, or it can be loaded into the program counter to execute code.

(iii) By memory spaces
b) Harvard machines have two separate memories: a program memory and a data memory.

HARVARD ARCHITECTURE
Harvard architecture has separate memories for data and instructions, which allows the CPU to fetch instructions and data simultaneously.
Figure: Harvard architecture instruction and data flow.
What is Harvard architecture? It is a type of computer architecture that keeps program data and instructions in storage and signal channels that are physically independent of one another. In contrast to the Von Neumann design, which uses a single bus both to fetch instructions from memory and to move data from one part of the computer to another, the Harvard architecture keeps data and instructions in their own distinct memory spaces. The two ideas are comparable except for the way in which they access memory. The Harvard architecture is based on the concept of dividing memory into two distinct sections, one dedicated to storing data and the other to storing programs. This architecture is commonly used in embedded systems and digital signal processors.
To work with Harvard architecture, it is important to optimize the memory hierarchy for separate instruction and data accesses. This can be achieved by using separate instruction and data caches, and by optimizing the instruction decoder to minimize the number of instructions fetched.

Von Neumann architecture
What is Von Neumann architecture? It is a widely used computer architecture that uses a single shared memory for both data and instructions. The CPU fetches instructions and data from that memory and stores results back to it. The design is derived from the idea of the stored-program computer, in which memory holds both the program's instructions and the data the instructions operate on. Before the introduction of the Von Neumann idea of computer design, computing machines were built for a single predetermined function, and their level of sophistication was limited by the need to rewire the circuitry by hand. The ability to keep instructions in memory alongside the data they operate on is the central tenet of the Von Neumann architecture. This architecture is commonly used in personal computers and servers.
To work with Von Neumann architecture, it is important to optimize the system's memory hierarchy to reduce memory access latency. This can be achieved with caches, pipelining, and prefetching techniques. The instruction set should also be optimized for efficient execution of common tasks. (A toy code sketch contrasting the unified and split memory models follows below.)
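To make the contrast between the two memory models concrete, here is a toy sketch in C of the same fetch-and-execute loop run first over a unified (Von Neumann/Princeton) memory and then over separate Harvard-style program and data memories. The one-byte "instruction set" (LOAD, ADD, HALT), the sample values, and the array sizes are invented purely for illustration; no real ISA is implied.

```c
/* Toy sketch: the same fetch-and-execute loop over two memory models. */
#include <stdio.h>

#define HALT 0x00
#define LOAD 0x01   /* load mem[operand] into the accumulator */
#define ADD  0x02   /* add mem[operand] to the accumulator    */

/* Von Neumann style: one unified memory holds both code and data,
   so one address space (and one bus) serves fetches and data reads. */
unsigned char unified[8] = { LOAD, 6, ADD, 7, HALT, 0, 25, 17 };

/* Harvard style: separate program and data memories with their own
   address spaces, so an instruction fetch and a data access can
   happen at the same time. */
unsigned char program[5] = { LOAD, 0, ADD, 1, HALT };
unsigned char data[2]    = { 25, 17 };

static int run(const unsigned char *code, const unsigned char *mem)
{
    int pc = 0, acc = 0;
    for (;;) {
        unsigned char op = code[pc++];       /* instruction fetch */
        if (op == HALT) return acc;
        unsigned char addr = code[pc++];     /* operand fetch     */
        if (op == LOAD) acc  = mem[addr];    /* data access       */
        if (op == ADD)  acc += mem[addr];
    }
}

int main(void)
{
    /* Von Neumann: code and data come from the same array.         */
    printf("von Neumann result: %d\n", run(unified, unified));
    /* Harvard: code and data come from physically separate arrays. */
    printf("Harvard result:     %d\n", run(program, data));
    return 0;
}
```

In the first call the same array serves both instruction fetches and data accesses, so they share one address space and, in real hardware, one bus; in the second call the fetches and the data accesses go to physically separate arrays, which is what allows an actual Harvard machine to perform both in the same clock cycle.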
c) ARM architecture
This architecture is commonly used in mobile devices and embedded systems. To work with ARM architecture, it is important to optimize the system's power consumption while maintaining good performance. This can be achieved by using power-efficient processors, optimizing the instruction set for energy-efficient execution, and reducing the frequency of memory accesses.

d) GPU architecture
This architecture is commonly used in graphics-intensive applications and scientific computing. To work with GPU architecture, it is important to optimize the application for parallel execution using threads and blocks, and to use specialized libraries such as CUDA or OpenCL for efficient execution on the GPU. (A short CUDA sketch follows at the end of this lecture.)

e) Quantum architecture
This architecture is a new and emerging technology that is being developed for quantum computing. To work with quantum architecture, it is important to optimize algorithms and applications for quantum execution, which requires a fundamentally different approach to problem solving than classical computing.
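As an illustration of the thread-and-block model mentioned under the GPU architecture above, here is a minimal CUDA C vector-addition sketch. The kernel, the array size, and the launch configuration are illustrative assumptions rather than lecture material, and error checking is omitted for brevity.

```cuda
/* Minimal CUDA C sketch: each GPU thread adds one pair of elements. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n)
{
    /* Each thread derives a unique global index from its block and
       thread coordinates and handles exactly one element. */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                 /* about one million elements */
    size_t bytes = n * sizeof(float);

    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    /* Launch enough blocks of 256 threads to cover all n elements. */
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", h_c[0]);       /* expect 3.0 */

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Each element of the result is computed by its own GPU thread; the host code only allocates device memory, copies the inputs across, launches the kernel over enough blocks to cover all elements, and copies the result back.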