Full Transcript

RELEVANT TOOLS, STANDARDS AND/OR ENGINEERING CONSTRAINTS
First Semester SY 2024-2025

LEARNING OBJECTIVE
To determine the relevant tools, standards and/or engineering constraints:
▪ Design Goals
▪ Cost
▪ Performance
▪ Power Consumption
▪ Instruction Set Architecture

DESIGN GOALS
The precise shape of a computer system is determined by the constraints and objectives for which it was designed. It involves variables such as:
▪ Standards
▪ Cost
▪ Memory space
▪ Latency
▪ Throughput
These are typically traded off against one another in computer architectures.

DESIGN GOALS (Cont.)
Other variables are also taken into account:
▪ Features
▪ Scale
▪ Weight
▪ Reliability
▪ Expandability
▪ Power Consumption

COST
Cost is generally held roughly constant, dictated by device or commercial requirements.

PERFORMANCE
▪ The clock speed of a computer (in MHz or GHz) is often used to describe its performance. This is the number of cycles per second at which the CPU's main clock runs. A system with a higher clock rate does not always perform better.
▪ Other indicators include the overall speed of the computer and the amount of cache the processor has: in general, the faster a processor runs and the larger its cache, the better it performs.

SPEED OF A COMPUTER
It can be affected by the following:
▪ Number of functional units in the system
▪ Bus speed
▪ Usable memory
▪ Type and order of instructions in the program being executed

TWO MAIN TYPES OF SPEED OF A COMPUTER
▪ Latency refers to the interval between the start of a process and its completion.
▪ Throughput refers to the amount of work completed per unit of time.

LATENCY
▪ Interrupt latency refers to the system's guaranteed maximum response time to an electronic event (e.g., when the disk drive finishes moving some data).
Note: A wide variety of design decisions influence performance. For example, pipelining a processor increases throughput but worsens latency, since each individual instruction takes longer to complete.

LOW INTERRUPT LATENCIES
▪ Low interrupt latencies are needed by computers that control machinery. These computers work in a real-time environment, and if an operation takes longer than expected the system fails. Computer-controlled anti-lock brakes, for example, must begin braking almost immediately after being told to do so.

PERFORMANCE DEPENDING ON THE APPLICATION DOMAIN
Other metrics may be used to assess a computer's performance. A system may be:
▪ CPU bound (as in numerical calculation)
▪ I/O bound (as in a web-serving application)
▪ Memory bound (as in a database application or video editing)
In servers and portable devices such as laptops, power consumption has become increasingly significant.

BENCHMARKING ATTEMPTS
▪ To account for all of these variables by measuring the time it takes a machine to run through a series of test programs.
▪ Benchmarking may reveal strengths, but it may not by itself settle the choice of a computer.
▪ Sometimes the machines being compared excel on different scales. For example, one machine may handle scientific applications better, while another plays common video games more smoothly.
▪ Designers have also been known to include special features in their products, whether in hardware or software, that allow a particular benchmark to run quickly but do not provide similar benefits for other, more general tasks.
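A minimal sketch of the idea behind benchmarking, in Python: time how long the machine takes to run a fixed test workload and report the elapsed wall-clock time. The workload below (a naive triple loop) is purely a hypothetical stand-in; real benchmark suites use carefully chosen programs.

```python
import time

def test_workload(n=150):
    """Hypothetical stand-in for a benchmark program: a naive O(n^3) loop."""
    a = [[(i + j) % 7 for j in range(n)] for i in range(n)]
    b = [[(i * j) % 5 for j in range(n)] for i in range(n)]
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

start = time.perf_counter()
test_workload()
elapsed = time.perf_counter() - start
print(f"Test workload finished in {elapsed:.3f} s")
```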
POWER CONSUMPTION
▪ Power consumption has become increasingly significant, particularly in servers and in portable devices such as laptops.

THANK YOU

Basic Concepts: Assembly Language Programming
BSCpE IV, First Semester SY 2024-2025

Outline
❖ Welcome to Assembly
❖ Assembly-, Machine-, and High-Level Languages
❖ Assembly Language Programming Tools
❖ Programmer's View of a Computer System

Goals and Required Background
❖ Goals: broaden the student's interest and knowledge in …
  ▪ How to write assembly language programs
  ▪ How high-level languages translate into assembly language
  ▪ The interaction between assembly language programs, libraries, the operating system, and the hardware
❖ Required background
  ▪ The student should already be able to program confidently in at least one high-level programming language, such as Java or C.

Next …
❖ Welcome
❖ Assembly-, Machine-, and High-Level Languages
❖ Assembly Language Programming Tools
❖ Programmer's View of a Computer System
❖ Basic Computer Organization
❖ Data Representation

Some Important Questions to Ask
❖ What is Assembly Language?
❖ Why Learn Assembly Language?
❖ What is Machine Language?
❖ How is Assembly related to Machine Language?
❖ What is an Assembler?
❖ How is Assembly related to High-Level Language?
❖ Is Assembly Language portable?

A Hierarchy of Languages

Assembly and Machine Language
❖ Machine language
  ▪ Native to a processor: executed directly by hardware
  ▪ Instructions consist of binary code: 1s and 0s
❖ Assembly language
  ▪ A programming language that uses symbolic names to represent operations, registers, and memory locations
  ▪ A slightly higher-level language
  ▪ Readability of instructions is better than machine language
  ▪ One-to-one correspondence with machine language instructions
❖ Assemblers translate assembly to machine code
❖ Compilers translate high-level programs to machine code
  ▪ Either directly, or
  ▪ Indirectly via an assembler

Compiler and Assembler

Instructions and Machine Language
❖ Each command of a program is called an instruction (it instructs the computer what to do).
❖ Computers only deal with binary data, hence the instructions must be in binary format (0s and 1s).
❖ The set of all instructions (in binary form) makes up the computer's machine language. This is also referred to as the instruction set.

Instruction Fields
❖ Machine language instructions are usually made up of several fields. Each field specifies different information for the computer. The two major fields are:
❖ The opcode field, which stands for operation code and specifies the particular operation that is to be performed.
  ▪ Each operation has its own unique opcode.
❖ The operand fields, which specify where to get the source and destination operands for the operation specified by the opcode.
  ▪ The source/destination of an operand can be a constant, a memory location, or one of the general-purpose registers.
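To make the idea of opcode and operand fields concrete, here is a small Python sketch that decodes a made-up 16-bit instruction format (4-bit opcode, two 4-bit register fields, 4-bit immediate). The format, field widths, and mnemonic table are invented for illustration and are not taken from any real instruction set.

```python
# Hypothetical 16-bit format: [15:12] opcode | [11:8] dest reg | [7:4] src reg | [3:0] immediate
OPCODES = {0x1: "LOAD", 0x2: "STORE", 0x3: "ADD", 0x4: "SUB"}  # made-up encoding

def decode(instruction: int) -> str:
    opcode = (instruction >> 12) & 0xF   # which operation to perform
    rd     = (instruction >> 8) & 0xF    # destination register field
    rs     = (instruction >> 4) & 0xF    # source register field
    imm    = instruction & 0xF           # immediate operand field
    mnemonic = OPCODES.get(opcode, "???")
    return f"{mnemonic} r{rd}, r{rs}, #{imm}"

print(decode(0x3215))   # -> "ADD r2, r1, #5" under this made-up encoding
```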
Assembly vs. Machine Code

Translating Languages
English: D is assigned the sum of A times B plus 10.
High-Level Language: D = A * B + 10
A statement in a high-level language is typically translated into several machine-level instructions:

  Intel Assembly Language        Intel Machine Language
  mov eax, A                     A1 00404000
  mul B                          F7 25 00404004
  add eax, 10                    83 C0 0A
  mov D, eax                     A3 00404008

Mapping Between Assembly Language and HLL
❖ Translating HLL programs to machine language programs is not a one-to-one mapping
❖ A HLL instruction (usually called a statement) will be translated into one or more machine language instructions

Advantages of High-Level Languages
❖ Program development is faster
  ▪ High-level statements: fewer instructions to code
❖ Program maintenance is easier
  ▪ For the same reasons
❖ Programs are portable
  ▪ They contain few machine-dependent details and can be used with little or no modification on different machines
  ▪ The compiler translates to the target machine language
  ▪ However, assembly language programs are not portable

Why Learn Assembly Language?
❖ Accessibility to system hardware
  ▪ Assembly language is useful for implementing system software
  ▪ Also useful for small embedded system applications
❖ Space and time efficiency
  ▪ Understanding sources of program inefficiency
  ▪ Tuning program performance
  ▪ Writing compact code
❖ Writing assembly programs gives the computer designer the deep understanding of the instruction set needed to design one
❖ To be able to write compilers for HLLs, we need to be expert in the machine language; assembly programming provides this experience

Assembly vs. High-Level Languages
❖ Some representative types of applications:

Next …
❖ Welcome
❖ Assembly-, Machine-, and High-Level Languages
❖ Assembly Language Programming Tools
❖ Programmer's View of a Computer System
❖ Basic Computer Organization

Assembler
❖ Software tools are needed for editing, assembling, linking, and debugging assembly language programs
❖ An assembler is a program that converts source-code programs written in assembly language into object files in machine language
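As a rough illustration of what an assembler does (and of the one-to-one mnemonic-to-opcode correspondence noted earlier), here is a toy Python sketch that "assembles" a tiny made-up instruction set into bytes. The mnemonics, opcodes, and encoding are invented for illustration; a real assembler such as MASM or NASM also handles symbols, macros, relocation, and object-file formats.

```python
# Toy one-to-one mapping from mnemonics to made-up 1-byte opcodes.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "MUL": 0x03, "STORE": 0x04, "HALT": 0xFF}

def assemble(source_lines):
    """Translate each 'MNEMONIC operand' line into two bytes: opcode, operand."""
    machine_code = bytearray()
    for line in source_lines:
        parts = line.split(";")[0].split()   # drop comments, then tokenize
        if not parts:
            continue
        mnemonic, *operand = parts
        machine_code.append(OPCODES[mnemonic])
        machine_code.append(int(operand[0]) if operand else 0)
    return bytes(machine_code)

program = [
    "LOAD 10   ; load the value at address 10",
    "ADD 11    ; add the value at address 11",
    "STORE 12  ; store the result at address 12",
    "HALT",
]
print(assemble(program).hex(" "))   # -> 01 0a 02 0b 04 0c ff 00
```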
❖ Popular assemblers have emerged over the years for the Intel family of processors. These include:
  ▪ TASM (Turbo Assembler from Borland)
  ▪ NASM (Netwide Assembler, for both Windows and Linux)
  ▪ The GNU assembler, distributed by the Free Software Foundation

Linker and Link Libraries
❖ You need a linker program to produce executable files
❖ It combines your program's object file created by the assembler with other object files and link libraries, and produces a single executable program
❖ LINK32.EXE is the linker program provided with the MASM distribution for linking 32-bit programs
❖ We will also use a link library for input and output, called Irvine32.lib, developed by Kip Irvine
  ▪ Works in Win32 console mode under MS-Windows

Assemble and Link Process
[Figure: each source file is translated by the assembler into an object file; the linker combines the object files with the link libraries into a single executable file]
❖ A project may consist of multiple source files
❖ The assembler translates each source file separately into an object file
❖ The linker links all object files together with the link libraries
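The assemble-then-link flow above can be sketched as two tool invocations. The course toolchain is MASM with LINK32.EXE and Irvine32.lib on Windows; purely as an illustration, the hypothetical sketch below drives the NASM assembler and the GNU linker on Linux instead, and assumes that nasm and ld are installed and that a source file named main.asm exists.

```python
import subprocess

# Step 1: assemble the source file into an object file (NASM, 64-bit ELF output).
subprocess.run(["nasm", "-f", "elf64", "main.asm", "-o", "main.o"], check=True)

# Step 2: link the object file into an executable (GNU ld).
subprocess.run(["ld", "main.o", "-o", "main"], check=True)

# Step 3: run the resulting executable.
subprocess.run(["./main"], check=True)
```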
Debugger
❖ Allows you to trace the execution of a program
❖ Allows you to view code, memory, registers, etc.
❖ Example: the 32-bit Windows debugger

Editor
❖ Allows you to create assembly language source files
❖ Some editors provide syntax highlighting features and can be customized as a programming environment

Next …
❖ Welcome
❖ Assembly-, Machine-, and High-Level Languages
❖ Assembly Language Programming Tools
❖ Programmer's View of a Computer System
❖ Basic Computer Organization

Programmer's View of a Computer System
Level 5: Application Programs / High-Level Language
Level 4: Assembly Language
Level 3: Operating System
Level 2: Instruction Set Architecture
Level 1: Microarchitecture
Level 0: Digital Logic
The level of abstraction increases from Level 0 up to Level 5; each level hides the details of the level below it.

Programmer's View – 2
❖ Application Programs (Level 5)
  ▪ Written in high-level programming languages such as Java, C++, Pascal, Visual Basic...
  ▪ Programs compile into the assembly language level (Level 4)
❖ Assembly Language (Level 4)
  ▪ Instruction mnemonics are used
  ▪ Have a one-to-one correspondence to machine language
  ▪ Calls functions written at the operating system level (Level 3)
  ▪ Programs are translated into machine language (Level 2)
❖ Operating System (Level 3)
  ▪ Provides services to Level 4 and Level 5 programs
  ▪ Translated to run at the machine instruction level (Level 2)

Programmer's View – 3
❖ Instruction Set Architecture (Level 2)
  ▪ Specifies how a processor functions
  ▪ Machine instructions, registers, and memory are exposed
  ▪ Machine language is executed by Level 1 (microarchitecture)
❖ Microarchitecture (Level 1)
  ▪ Controls the execution of machine instructions (Level 2)
  ▪ Implemented by digital logic (Level 0)
❖ Digital Logic (Level 0)
  ▪ Implements the microarchitecture
  ▪ Uses digital logic gates
  ▪ Logic gates are implemented using transistors

Instruction Set Architecture (ISA)
❖ The collection of assembly/machine instructions of the machine
❖ The machine resources that can be managed with these instructions
  ▪ Memory
  ▪ Programmer-accessible registers
❖ Provides a hardware/software interface

COMPUTER METRICS
Computer Architecture and Organization (CpE 415), First Semester AY 2024-2025

Objectives
After this topic, students should be able to:
▪ Identify and compare the performance of various computers
▪ Determine the different indicators of performance
▪ Determine the response time, throughput, and execution time of a computer system

Quotation on Numbers (William Thomson, Lord Kelvin, 1824-1907)
"When you can measure what you are speaking about, and express it in numbers, you know something about it. But when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind. It may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of science."

Metrics
Metrics are measures of quantitative assessment commonly used for comparing and tracking performance or production.

Criteria (David Lilja)
Criteria by which to judge metrics of computer performance:
▪ Linearity
▪ Reliability
▪ Repeatability
▪ Ease of measurement
▪ Consistency
▪ Independence

Linearity
A metric should be linear: increasing the performance of a computer by a fraction x should be reflected by an increase of fraction x in the metric.
Example: If computer A has a metric of 200 and is twice as fast as computer B, then computer B's metric should be 100.

Reliability
A metric should be reliable and correctly indicate whether one computer is faster than another. This property is also called monotonicity: an increase in the value of the metric should indicate an increase in the speed of the computer.
Note: This isn't true of all metrics. Sometimes a computer may have a metric implying a higher level of performance than another computer when its performance is actually worse. This situation arises when there is a poor relationship between what the metric actually measures and the way in which the computer operates.

Repeatability
A good metric should be repeatable and always yield the same result under the same conditions.
Note: Not all computer systems are deterministic (responding in the same way to the same data).

Ease of Measurement
If it is difficult to measure a performance criterion, few users are likely to make that measurement. Moreover, if a metric is difficult to measure, an independent tester will have great difficulty confirming it.

Consistency
A metric is consistent if it is precisely defined and can be applied across different systems. This property might better be called universality or generality to avoid confusion with repeatability.

Independence
A good metric is independent of commercial influences. "If computer manufacturers defined performance metrics, they might be tempted to select a criterion that shows their processor in a better light than their competitor's processor."

Terminology for Computer Performance
▪ Efficiency
▪ Throughput
▪ Latency
▪ Relative Performance
▪ Time and Rate

Efficiency
A computer is always executing instructions unless it is in a halt state or a suspended state. The efficiency of a computer is an indication of the fraction of time that it is doing useful work. The definition of efficiency is:
Efficiency = (time spent executing useful work) / (total time) = optimal time / actual time
Example: If a computer takes 20 s to perform a computational task and 5 s is spent waiting for an idle disk to spin up to speed, then
Efficiency = 20 s / (20 s + 5 s) = 20/25 = 80%

Throughput
Throughput is a measure of the amount of work a system performs per unit time.
Example: A bus's throughput is measured in megabits/s, whereas a computer's throughput is measured in instructions per second.
The upper limit on a system's throughput can normally be determined from basic system parameters. If a computer has a 500 MHz clock, can execute up to two instructions in parallel per clock cycle, and each instruction takes 1, 2, or 4 clock cycles, then the upper limit on throughput occurs when all instructions are executed in parallel in one cycle each; that is, 10^9 instructions/s.
Note: Throughput includes the term "amount of work" because instruction execution is meaningful only if the instructions are performing useful calculations.

Latency
Latency is the delay between activating a process (for example, a memory write, a disk read, or a bus transaction) and the start of the operation. It is the waiting time. It is an important consideration in the design of rotating disk memory systems, where, for example, you have to wait on average half a revolution for data to come under the read/write head.

Relative Performance
Relative performance refers to how one computer performs with respect to another. The relative performance of computers A and B is the inverse ratio of their execution times:
Performance of A relative to B = performance of A / performance of B = execution time of B / execution time of A
Example: System A executes a program in 105 s and system B executes the same program in 125 s. Calculate the value of n (the relative performance).
n = execution time of B / execution time of A = 125/105 = 1.19
Machine A is 19% faster than B.
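The two worked examples above are simple enough to check with a few lines of Python; the sketch below just restates the definitions of efficiency and relative performance as functions.

```python
def efficiency(useful_time: float, total_time: float) -> float:
    """Fraction of the total time spent doing useful work."""
    return useful_time / total_time

def relative_performance(time_a: float, time_b: float) -> float:
    """Performance of A relative to B = execution time of B / execution time of A."""
    return time_b / time_a

# Efficiency example: 20 s of useful work plus 5 s waiting for the disk.
print(f"Efficiency = {efficiency(20, 20 + 5):.0%}")               # 80%

# Relative performance example: A runs the program in 105 s, B in 125 s.
print(f"A relative to B = {relative_performance(105, 125):.2f}")  # 1.19, so A is ~19% faster
```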
Relative Performance
The objective of a computer designer is to create a system with the greatest possible throughput, that is, to make the new system better than the old system. The old system may be a previous machine, the same machine without the improvement, or even a competitor's machine, and is called the reference machine or baseline machine.
The speedup ratio is a measure of relative performance:
Speedup ratio = execution time on the reference machine / execution time on the machine under test

Speedup Ratio
Example: A reference machine takes 100 seconds to run a program and the test machine takes 50 seconds. What is the speedup ratio?
Speedup ratio = 100/50 = 2

Clock Rate
▪ It is the most obvious indicator of a computer's performance.
▪ It is the speed at which fundamental operations are carried out within the computer.
▪ It is also considered the worst metric by which to judge computers: a poor indicator of performance, because there is no simple relationship between clock rate and system performance across different platforms.

The CPU's Clock
[Figure: the CPU's clock signal drives its registers and the program counter (PC); the clock frequency is f = 1/T, where T is the clock period]

Problem Solving
The times taken by machines A, B, and C to execute a given task are:
A: 12 min 30 s
B: 8 min 5 s
C: 10 min 5 s
What is the performance of each of these machines relative to machine A? Determine the speedup ratios of machines B and C if machine A is the reference.

Millions of Instructions per Second (MIPS)
MIPS removes the discrepancy between systems with different numbers of clocks per operation by measuring instructions per second rather than clocks per second. For a given computer,
MIPS = n / (t_execute × 10^6)
where:
n is the number of instructions executed
t_execute is the time taken to execute them
The MIPS rating tells only how fast a computer executes instructions; it doesn't tell what is actually achieved by the instructions being executed.

The Expression z = 4(x + y) Executed on Two Hypothetical Computers
Computer A has a load/store architecture without a multiplier, and computer B has a memory-to-register architecture. Computer A's code is more verbose than B's.

Computer A (load/store):
  LDR r1, (r0)     ; load x
  LDR r2, (4, r0)  ; load y
  ADD r2, r1, r2   ; x + y
  ADD r2, r2, r2   ; 2(x + y)
  ADD r2, r2, r2   ; 4(x + y)
  STR r2, (8, r0)  ; store z

Computer B (memory-to-register):
  LDR r1, (r0)     ; load x
  ADD r1, (4, r0)  ; x + y
  MUL r1, #4       ; 4(x + y)
  STR r1, (8, r0)  ; store z

MIPS Metric
The MIPS rating is also sensitive to the way in which a compiler generates code. The duration of a single instruction is
cycles × t_cycle
where:
cycles is the number of machine cycles required to execute the instruction
t_cycle is the cycle time (usually the clock period)
The total execution time for a program is given by:
t_execution = t_cycle × Σ(n_i × c_i)
where:
n_i is the number of times instruction i occurs in the program
c_i is the number of cycles required by instruction i
Therefore:
MIPS = n / (t_cycle × Σ(n_i × c_i) × 10^6)
Note: The MIPS value is affected by the instruction mix (i.e., the n_i terms) and the length of each instruction executed (i.e., the c_i terms).
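A short Python sketch of the MIPS formula above; `mix` maps each instruction class to a (count, cycles per instruction) pair. The numbers in the usage line are the first compiler case worked out in the example that follows (2 million one-cycle plus 1 million two-cycle instructions at a 10 ns cycle time).

```python
def mips(mix, t_cycle_s: float) -> float:
    """MIPS = total instructions / (execution time in seconds * 10**6).

    mix: iterable of (count, cycles_per_instruction) pairs
    t_cycle_s: clock cycle time in seconds
    """
    total_instructions = sum(count for count, _ in mix)
    execution_time = t_cycle_s * sum(count * cycles for count, cycles in mix)
    return total_instructions / (execution_time * 1e6)

# 2 million one-cycle and 1 million two-cycle instructions, 10 ns cycle time.
print(mips([(2e6, 1), (1e6, 2)], 10e-9))   # -> 75.0
```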
MIPS Metric Example
A program is compiled to run on a computer, and the compiler generates two million one-cycle instructions and one million two-cycle instructions. If we assume that the cycle time is 10 ns, the time taken is given by
2 × 10^6 × 1 × 10 ns + 1 × 10^6 × 2 × 10 ns = 4 × 10^6 × 10 ns = 4 × 10^-2 s
Suppose that a different compiler generates code for the same problem but with 1.5 million one-cycle instructions and 1.2 million two-cycle instructions. Then
1.5 × 10^6 × 1 × 10 ns + 1.2 × 10^6 × 2 × 10 ns = 3.9 × 10^6 × 10 ns = 3.9 × 10^-2 s

To evaluate MIPS for each case:
Case 1: MIPS = n / (t_cycle × Σ(n_i × c_i) × 10^6) = 3 × 10^6 / (10 ns × (2 × 10^6 × 1 + 1 × 10^6 × 2) × 10^6) = 0.75 × 10^2 = 75 MIPS
Case 2: MIPS = n / (t_cycle × Σ(n_i × c_i) × 10^6) = 2.7 × 10^6 / (10 ns × (1.5 × 10^6 × 1 + 1.2 × 10^6 × 2) × 10^6) = 0.69 × 10^2 = 69 MIPS

Problem Solving
A program is run on a computer with the following parameters:
Clock cycle time: 10 ns
Total instructions: 4 × 10^6
Instructions with 1 cycle: 70%
Instructions with 2 cycles: 20%
Instructions with 3 cycles: 10%
What is the MIPS rating of this computer?
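One way to check an answer to this exercise is to reuse the MIPS definition from the earlier sketch; this is only a checking aid under the stated parameters, not a substitute for the hand calculation.

```python
def mips(mix, t_cycle_s):
    """MIPS = total instructions / (execution time in seconds * 10**6)."""
    total = sum(count for count, _ in mix)
    exec_time = t_cycle_s * sum(count * cycles for count, cycles in mix)
    return total / (exec_time * 1e6)

total_instructions = 4e6
mix = [(0.70 * total_instructions, 1),   # 70% one-cycle instructions
       (0.20 * total_instructions, 2),   # 20% two-cycle instructions
       (0.10 * total_instructions, 3)]   # 10% three-cycle instructions
print(round(mips(mix, 10e-9), 1), "MIPS")
```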
COMPUTER SYSTEM ARCHITECTURE
First Semester AY 2024-2025

Objectives
After this topic, students should be able to:
▪ Differentiate computer organization from computer architecture
▪ Identify the rapid changes in implementation technology
▪ Determine the distribution of power in integrated circuits
▪ Analyze and determine the major factors that influence the cost of a computer and how these factors change over time

Architecture
The practice of building design and its resulting products; customary usage refers only to those designs and structures that are culturally significant. (Microsoft Encarta)

Architecture (Computer Science)
A general term referring to the structure of all or part of a computer system. It covers the design of system software, such as the operating system (the program that controls the computer). It also refers to the combination of hardware and basic software that links the machines on a computer network.

Computer Systems Architecture
It refers to an entire structure and to the details needed to make it functional. It covers computer systems, microprocessors, circuits, and system programs. It implies structure: how the elements of a computer fit together.

Computer Organization
It represents the implementation of the computer architecture. It refers to the computer's implementation in terms of its actual hardware.

Factors Affecting the Computer Designer
[Figure: technology, applications, and tools all bear on the computer architect and the resulting system organization]
▪ Technology indicates the importance of the processes used to manufacture computer components.
▪ Applications refer to the end use of the computer.
▪ Tools cover a range of software products, from packages that perform hardware design at the circuit level, to computer simulators, to suites of programs used as benchmarks or test cases to compare the speeds of different computers.

Computer Technologies
[Figure: the technologies surrounding a computer: semiconductor technology, optical technology, magnetic materials, buses, and peripherals]
▪ Device technologies determine the speed of the computer and the capacity of its memory system.
▪ Semiconductor technologies are used to fabricate the processor and its main memory.
▪ Magnetic technologies are used for hard disks.
▪ Optical technologies are used for CD-ROM, DVD, and Blu-ray drives and for network links.
▪ Buses: their structure, organization, and control affect the performance of a computer.
▪ Peripherals.

Computer Architect
Determines what attributes are important for a new computer, then designs a computer to maximize performance while staying within cost, power, and availability constraints. The task has many aspects:
▪ instruction set design
▪ functional organization
▪ logic design
▪ implementation

ASK YOURSELF
Why is the performance of a computer so dependent on a range of technologies such as semiconductor, magnetic, optical, chemical, and so on? Think of at least two types of computer system and compare their performance.

EVOLUTION OF COMPUTERS
Computer Architecture and Organization

LEARNING OBJECTIVES
▪ Learn a brief history of the evolution of computers
▪ Determine how computer technology develops over the next ...

THE EARLY YEARS
FIRST GENERATION COMPUTERS (1940-1956): VACUUM TUBE
SECOND GENERATION COMPUTERS (1956-1963)
THIRD GENERATION COMPUTERS (1964-1971)
THE ADVANTAGES OF IC
SOFTWARE TECHNOLOGY
FOURTH GENERATION COMPUTERS (1971-PRESENT)
FOURTH GENERATION COMPUTERS (1971-PRESENT) CONT.
FOURTH GENERATION COMPUTERS (1971-PRESENT) ADVANTAGES
FIFTH GENERATION COMPUTERS (PRESENT AND BEYOND)
NEW ERA COMPUTERS
