

The CPI varied by a factor of 4 for SPECspeed 2017 Integer on an Intel Xeon computer in Figure 1.18, so MIPS does as well. Finally, and most importantly, if a new program executes more instructions but each instruction is faster, MIPS can vary independently from performance!

Check Yourself: Consider the following performance measurements for a program:

    Measurement         Computer A    Computer B
    Instruction count   10 billion    8 billion
    Clock rate          4 GHz         4 GHz
    CPI                 1.0           1.1

a. Which computer has the higher MIPS rating?
b. Which computer is faster?

1.12 Concluding Remarks

"Where … the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have 1,000 vacuum tubes and perhaps weigh just 1½ tons."
Popular Mechanics, March 1949

Although it is difficult to predict exactly what level of cost/performance computers will have in the future, it's a safe bet that they will be much better than they are today. To participate in these advances, computer designers and programmers must understand a wider variety of issues.

Both hardware and software designers construct computer systems in hierarchical layers, with each lower layer hiding details from the level above. This great idea of abstraction is fundamental to understanding today's computer systems, but it does not mean that designers can limit themselves to knowing a single abstraction. Perhaps the most important example of abstraction is the interface between hardware and low-level software, called the instruction set architecture. Maintaining the instruction set architecture as a constant enables many implementations of that architecture, presumably varying in cost and performance, to run identical software. On the downside, the architecture may preclude introducing innovations that require the interface to change.

There is a reliable method of determining and reporting performance by using the execution time of real programs as the metric.
This execution time is related to other important measurements we can make by the following equation:

    Seconds/Program = (Instructions/Program) × (Clock cycles/Instruction) × (Seconds/Clock cycle)

We will use this equation and its constituent factors many times. Remember, though, that individually the factors do not determine performance: only the product, which equals execution time, is a reliable measure of performance.

The BIG Picture: Execution time is the only valid and unimpeachable measure of performance. Many other metrics have been proposed and found wanting. Sometimes these metrics are flawed from the start by not reflecting execution time; other times a metric that is sound in a limited context is extended and used beyond that context or without the additional clarification needed to make it valid.

The key hardware technology for modern processors is silicon. While silicon fuels the rapid advance of hardware, new ideas in the organization of computers have improved price/performance. Two of the key ideas are exploiting parallelism in the program, normally today via multiple processors, and exploiting locality of accesses to a memory hierarchy, typically via caches.

Energy efficiency has replaced die area as the most critical resource of microprocessor design. Conserving power while trying to increase performance has forced the hardware industry to switch to multicore microprocessors, thereby requiring the software industry to switch to programming parallel hardware. Parallelism is now required for performance.

Computer designs have always been measured by cost and performance, as well as other important factors such as energy, dependability, cost of ownership, and scalability. Although this chapter has focused on cost, performance, and energy, the best designs will strike the appropriate balance for a given market among all the factors.
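The equation above can be evaluated directly. As a quick sketch (the function and variable names here are illustrative, not from the book), here it is applied to the Check Yourself measurements for computers A and B from the previous section:

```python
def cpu_time_s(instructions, cpi, clock_rate_hz):
    # Seconds/Program = (Instructions/Program)
    #                 x (Clock cycles/Instruction)
    #                 x (Seconds/Clock cycle)
    return instructions * cpi * (1.0 / clock_rate_hz)

def mips(instructions, time_s):
    # Millions of instructions per second.
    return instructions / (time_s * 1e6)

# Check Yourself measurements: A runs more instructions at a lower CPI.
time_a = cpu_time_s(10e9, 1.0, 4e9)   # Computer A
time_b = cpu_time_s(8e9, 1.1, 4e9)    # Computer B

print(f"A: {time_a:.2f} s, {mips(10e9, time_a):.0f} MIPS")  # A: 2.50 s, 4000 MIPS
print(f"B: {time_b:.2f} s, {mips(8e9, time_b):.0f} MIPS")   # B: 2.20 s, 3636 MIPS
```

Note that A posts the higher MIPS rating while B finishes sooner: exactly the independence between MIPS and performance that the text warns about.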
Road Map for This Book

At the bottom of these abstractions are the five classic components of a computer: datapath, control, memory, input, and output (refer to Figure 1.5). These five components also serve as the framework for the rest of the chapters in this book:

Datapath: Chapter 3, Chapter 4, Chapter 6, and Appendix B
Control: Chapter 4, Chapter 6, and Appendix B
Memory: Chapter 5
Input: Chapters 5 and 6
Output: Chapters 5 and 6

As mentioned above, Chapter 4 describes how processors exploit implicit parallelism, Chapter 6 describes the explicitly parallel multicore microprocessors that are at the heart of the parallel revolution, and Appendix B describes the highly parallel graphics processor chip. Chapter 5 describes how a memory hierarchy exploits locality. Chapter 2 describes instruction sets, the interface between compilers and the computer, and emphasizes the role of compilers and programming languages in using the features of the instruction set. Chapter 3 describes how computers handle arithmetic data. Appendix A introduces logic design.
