Computer Organization and Design, RISC-V Edition: The Hardware/Software Interface, by David A. Patterson and John L. Hennessy (Chapter 1 excerpt)
Check Yourself sections are designed to help readers assess whether they comprehend the major concepts introduced in a chapter and understand the implications of those concepts. Some Check Yourself questions have simple answers; others are for discussion among a group. Answers to the specific questions can be found at the end of the chapter. Check Yourself questions appear only at the end of a section, making it easy to skip them if you are sure you understand the material.

1. The number of embedded processors sold every year greatly outnumbers the number of PC and even post-PC processors. Can you confirm or deny this insight based on your own experience? Try to count the number of embedded processors in your home. How does it compare with the number of conventional computers in your home?

2. As mentioned earlier, both the software and hardware affect the performance of a program. Can you think of examples where each of the following is the right place to look for a performance bottleneck?

- The algorithm chosen
- The programming language or compiler
- The operating system
- The processor
- The I/O system and devices

1.2 Seven Great Ideas in Computer Architecture

We now introduce seven great ideas that computer architects have invented in the last 60 years of computer design. These ideas are so powerful they have lasted long after the first computer that used them, with newer architects demonstrating their admiration by imitating their predecessors. These great ideas are themes that we will weave through this and subsequent chapters as examples arise. To point out their influence, in this section we introduce icons and highlighted terms that represent the great ideas, and we use them to identify the nearly 100 sections of the book that feature use of the great ideas.

Use Abstraction to Simplify Design

Both computer architects and programmers had to invent techniques to make themselves more productive, for otherwise design time would lengthen as dramatically as resources grew. A major productivity technique for hardware and software is to use abstractions to characterize the design at different levels of representation; lower-level details are hidden to offer a simpler model at higher levels. We'll use the abstract painting icon to represent this great idea.

Make the Common Case Fast

Making the common case fast will tend to enhance performance better than optimizing the rare case. Ironically, the common case is often simpler than the rare case and hence is usually easier to enhance. This common-sense advice implies that you know what the common case is, which is only possible with careful experimentation and measurement (see Section 1.6). We use a sports car as the icon for making the common case fast, as the most common trip has one or two passengers, and it's surely easier to make a fast sports car than a fast minivan!
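To make this idea concrete, here is a minimal sketch in C (our own illustration, not from the book) of a common-case fast path: a byte-copy routine that assumes most copies in a hypothetical workload are long, and so moves data eight bytes at a time before falling back to a simple byte loop for the rare leftovers.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration of "make the common case fast": if most
 * copies are long, move 8 bytes at a time (the fast common case) and
 * keep a simple byte loop for the rare trailing bytes. */
void copy_bytes(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    /* Common case: the bulk of the data moves in 8-byte chunks.
     * memcpy of a fixed small size compiles to a single wide load/store. */
    while (n >= 8) {
        uint64_t w;
        memcpy(&w, s, sizeof w);
        memcpy(d, &w, sizeof w);
        d += 8; s += 8; n -= 8;
    }

    /* Rare case: up to 7 trailing bytes, handled simply. */
    while (n--)
        *d++ = *s++;
}
```

The point is the shape of the code, not this particular routine: measure first to learn what the common case actually is, then spend your effort there. (Production memcpy implementations are far more elaborate.)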
Performance via Parallelism

Since the dawn of computing, computer architects have offered designs that get more performance by computing operations in parallel. We'll see many examples of parallelism in this book. We use multiple jet engines of a plane as our icon for parallel performance.

Performance via Pipelining

A particular pattern of parallelism is so prevalent in computer architecture that it merits its own name: pipelining. For example, before fire engines, a "bucket brigade" would respond to a fire, which many cowboy movies show in response to a dastardly act by the villain. The townsfolk form a human chain to carry water from the source to the fire, since they can move buckets up the chain much more quickly than individuals running back and forth. Our pipeline icon is a sequence of pipes, with each section representing one stage of the pipeline.

Performance via Prediction

Following the saying that it can be better to ask for forgiveness than to ask for permission, the next great idea is prediction. In some cases, it can be faster on average to guess and start working rather than wait until you know for sure, assuming that the mechanism to recover from a misprediction is not too expensive and your prediction is relatively accurate. We use the fortune-teller's crystal ball as our prediction icon.
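As a hedged illustration of prediction (our own sketch, not the book's), here is the classic 2-bit saturating-counter branch predictor in C, a technique that reappears when the book discusses pipelined processors: guess the branch direction from recent history, start working on the guess, and nudge the counter toward the actual outcome once it is known.

```c
#include <stdio.h>

/* A minimal sketch of a 2-bit saturating-counter branch predictor:
 * predict "taken" when the counter is in the upper half (2 or 3), and
 * move the counter toward the actual outcome after each branch. */

static unsigned counter = 2;            /* start at "weakly taken" (range 0..3) */

int predict_taken(void)                 /* the guess we start working on */
{
    return counter >= 2;
}

void train(int actually_taken)          /* learn from the real outcome */
{
    if (actually_taken && counter < 3)
        counter++;
    else if (!actually_taken && counter > 0)
        counter--;
}

int main(void)
{
    /* A loop branch: taken 9 times, then falls through once at exit. */
    int outcomes[10] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 0};
    int correct = 0;
    for (int i = 0; i < 10; i++) {
        if (predict_taken() == outcomes[i])
            correct++;
        train(outcomes[i]);
    }
    printf("correct predictions: %d/10\n", correct);  /* prints 9/10 */
    return 0;
}
```

The two bits provide hysteresis: the single not-taken outcome at the loop exit costs one misprediction, but it does not flip the predictor, so the next run of taken branches is still predicted correctly.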
Hierarchy of Memories

Programmers want memory to be fast, large, and cheap, as memory speed often shapes performance, capacity limits the size of problems that can be solved, and the cost of memory today is often the majority of computer cost. Architects have found that they can address these conflicting demands with a hierarchy of memories, with the fastest, smallest, and most expensive memory per bit at the top of the hierarchy and the slowest, largest, and cheapest per bit at the bottom. As we shall see in Chapter 5, caches give the programmer the illusion that main memory is almost as fast as the top of the hierarchy and nearly as big and cheap as the bottom of the hierarchy. We use a layered triangle icon to represent the memory hierarchy. The shape indicates speed, cost, and size: the closer to the top, the faster and more expensive per bit the memory; the wider the base of the layer, the bigger the memory.

Dependability via Redundancy

Computers not only need to be fast; they need to be dependable. Since any physical device can fail, we make systems dependable by including redundant components that can take over when a failure occurs and that help detect failures. We use the tractor-trailer as our icon, since the dual tires on each side of its rear axles allow the truck to continue driving even when one tire fails. (Presumably, the truck driver heads immediately to a repair facility so the flat tire can be fixed, thereby restoring redundancy!) A small code sketch of this idea appears at the end of this section.

In the prior edition, we listed an eighth great idea, "Designing for Moore's Law." Gordon Moore, one of the founders of Intel, made a remarkable prediction in 1965: integrated circuit resources would double every year. A decade later, he amended his prediction to doubling every two years. His prediction was accurate, and for 50 years Moore's Law shaped computer architecture. As computer designs can take years, the resources available per chip ("transistors"; see page 25) could easily double or triple between the start and finish of a project. Like a skeet shooter, computer architects had to anticipate where the technology would be when the design was finished rather than design for when it began. Alas, no exponential growth can last forever, and Moore's Law is no longer accurate. The slowing of Moore's Law is shocking for computer designers who have long leveraged it. Some do not want to believe it is over despite substantial evidence to the contrary. Part of the reason is confusion between saying that Moore's prediction of a biennial doubling rate is now incorrect and claiming that semiconductors will no longer improve. Semiconductor technology will continue to improve, but more slowly than in the past. Starting with this edition, we will discuss the implications of the slowing of Moore's Law, especially in Chapter 6.

Elaboration: During the heyday of Moore's Law, the cost per chip resource dropped with each new technology generation. For the latest technologies, the cost per resource may be flat or even rising with each new generation, due to the cost of new equipment, the elaborate processes invented to make chips work at finer feature sizes, and the reduced number of companies investing in these new technologies to push the state of the art. Less competition naturally leads to higher prices.
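Finally, here is the promised sketch of dependability via redundancy (our own minimal illustration, with a deliberately simulated fault): triple modular redundancy (TMR) runs three copies of a computation and takes a majority vote, so a single faulty copy cannot corrupt the result.

```c
#include <stdio.h>

/* A minimal sketch of dependability via redundancy: triple modular
 * redundancy (TMR). Three replicas compute the same result; a majority
 * vote masks a fault in any one of them. */

int majority(int a, int b, int c)
{
    /* Any value held by at least two of the three replicas wins. */
    if (a == b || a == c)
        return a;
    return b;   /* otherwise b == c (or there is no majority at all) */
}

int main(void)
{
    int good = 42;      /* what two healthy replicas compute          */
    int faulty = 17;    /* a replica corrupted by a simulated fault   */

    printf("voted result: %d\n", majority(good, faulty, good)); /* 42 */
    return 0;
}
```

Like the truck's dual tires, the voter keeps the system rolling after one failure, but redundancy is only restored once the faulty replica is repaired or replaced.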