Introduction to Digital Circuit Design (2nd Edition Textbook, PDF)
Document Details
2002
Summary
This textbook introduces digital circuit design, providing a historical overview of its development from mechanical to modern electronic systems. It explores key issues in digital integrated circuit design and outlines quality measurement metrics. The document highlights the transition from vacuum tubes to transistors and ultimately to MOSFETs, which led to the digital integrated circuit revolution.
Full Transcript
CHAPTER 1: INTRODUCTION

The evolution of digital circuit design
Compelling issues in digital circuit design
How to measure the quality of a design
Valuable references

1.1 A Historical Perspective
1.2 Issues in Digital Integrated Circuit Design
1.3 Quality Metrics of a Digital Design
1.4 Summary
1.5 To Probe Further

1.1 A Historical Perspective

The concept of digital data manipulation has made a dramatic impact on our society. We have long grown accustomed to the idea of digital computers. Evolving steadily from mainframes and minicomputers, personal and laptop computers have proliferated into daily life. More significant, however, is a continuous trend towards digital solutions in all other areas of electronics. Instrumentation was one of the first noncomputing domains where the potential benefits of digital data manipulation over analog processing were recognized. Other areas, such as control, were soon to follow. Only recently have we witnessed the conversion of telecommunications and consumer electronics towards the digital format. Increasingly, telephone data is transmitted and processed digitally over both wired and wireless networks. The compact disk has revolutionized the audio world, and digital video is following in its footsteps.

The idea of implementing computational engines using an encoded data format is by no means an idea of our times. In the early nineteenth century, Babbage envisioned large-scale mechanical computing devices, called Difference Engines [Swade93]. Although these engines used the decimal number system rather than the binary representation now common in modern electronics, the underlying concepts are very similar. The Analytical Engine, developed in 1834, was conceived as a general-purpose computing machine, with features strikingly close to modern computers. Besides executing the basic repertoire of operations (addition, subtraction, multiplication, and division) in arbitrary sequences, the machine operated in a two-cycle sequence, called "store" and "mill" (execute), similar to current computers. It even used pipelining to speed up the execution of the addition operation! Unfortunately, the complexity and the cost of the designs made the concept impractical. For instance, the design of Difference Engine I (part of which is shown in Figure 1.1) required 25,000 mechanical parts at a total cost of £17,470 (in 1834!).

[Figure 1.1: Working part of Babbage's Difference Engine I (1832), the first known automatic calculator (from [Swade93], courtesy of the Science Museum of London).]

The electrical solution turned out to be more cost effective. Early digital electronic systems were based on magnetically controlled switches, or relays. They were mainly used in the implementation of very simple logic networks, such as train safety systems, where they are still in use at present. The age of digital electronic computing only started in full with the introduction of the vacuum tube. While originally used almost exclusively for analog processing, it was realized early on that the vacuum tube was useful for digital computations as well. Soon complete computers were realized.
The era of the vacuum-tube computer culminated in the design of machines such as the ENIAC (intended for computing artillery firing tables) and the UNIVAC I (the first successful commercial computer). To get an idea of the integration density, the ENIAC was 80 feet long, 8.5 feet high, and several feet wide, and incorporated 18,000 vacuum tubes. It rapidly became clear, however, that this design technology had reached its limits. Reliability problems and excessive power consumption made the implementation of larger engines economically and practically infeasible.

All changed with the invention of the transistor at Bell Telephone Laboratories in 1947 [Bardeen48], followed by the introduction of the bipolar transistor by Shockley in 1949 [Shockley49].¹ It took until 1956 before this led to the first bipolar digital logic gate, introduced by Harris [Harris56], and even more time before this translated into a set of commercial integrated-circuit logic gates, the Fairchild Micrologic family [Norman60]. The first truly successful IC logic family, TTL (Transistor-Transistor Logic), was pioneered in 1962 [Beeson62]. Other logic families were devised with higher performance in mind. Examples are the current-switching circuits that produced the first subnanosecond digital gates and culminated in the ECL (Emitter-Coupled Logic) family [Masaki74]. TTL had the advantage, however, of offering a higher integration density, and it was the basis of the first integrated circuit revolution. In fact, the manufacturing of TTL components is what spearheaded the first large semiconductor companies such as Fairchild, National, and Texas Instruments. The family was so successful that it composed the largest fraction of the digital semiconductor market until the 1980s.

Ultimately, bipolar digital logic lost the battle for hegemony in the digital design world for exactly the reasons that haunted the vacuum tube approach: the large power consumption per gate puts an upper limit on the number of gates that can be reliably integrated on a single die, package, housing, or box. Although attempts were made to develop high-integration-density, low-power bipolar families (such as I²L, Integrated Injection Logic [Hart72]), the torch was gradually passed to the MOS digital integrated circuit approach.

The basic principle behind the MOSFET transistor (originally called the IGFET) was proposed in a patent by J. Lilienfeld (Canada) as early as 1925, and, independently, by O. Heil in England in 1935. Insufficient knowledge of the materials and gate stability problems, however, delayed the practical usability of the device for a long time. Once these were solved, MOS digital integrated circuits started to take off in full in the early 1970s. Remarkably, the first MOS logic gates introduced were of the CMOS variety [Wanlass63], and this trend continued until the late 1960s. The complexity of the manufacturing process delayed the full exploitation of these devices for two more decades. Instead, the first practical MOS integrated circuits were implemented in PMOS-only logic and were used in applications such as calculators.

¹ An intriguing overview of the evolution of digital integrated circuits can be found in [Murphy93]. (Most of the data in this overview has been extracted from this reference.) It is accompanied by some of the historically ground-breaking publications in the domain of digital ICs.
The second age of the digital integrated circuit revolution was inaugurated with the introduction of the first microprocessors by Intel: the 4004 in 1971 [Faggin72] and the 8080 in 1974 [Shima74]. These processors were implemented in NMOS-only logic, which has the advantage of higher speed over PMOS logic. Simultaneously, MOS technology enabled the realization of the first high-density semiconductor memories. For instance, the first 4 Kbit MOS memory was introduced in 1970 [Hoff70]. These events were the start of a truly astounding evolution towards ever higher integration densities and speeds, a revolution that is still in full swing right now.

The road to the current levels of integration has not been without hindrances, however. In the late 1970s, NMOS-only logic started to suffer from the same plague that made high-density bipolar logic unattractive or infeasible: power consumption. This realization, combined with progress in manufacturing technology, finally tilted the balance towards the CMOS technology, and this is where we still are today. Interestingly enough, power consumption concerns are rapidly becoming dominant in CMOS design as well, and this time there does not seem to be a new technology around the corner to alleviate the problem.

Although the large majority of current integrated circuits are implemented in MOS technology, other technologies come into play when very high performance is at stake. An example is the BiCMOS technology, which combines bipolar and MOS devices on the same die; it is used in high-speed memories and gate arrays. When even higher performance is necessary, other technologies emerge besides the already mentioned bipolar silicon ECL family: Gallium-Arsenide, Silicon-Germanium, and even superconducting technologies. These technologies play only a very small role in the overall digital integrated circuit design scene, and with the ever increasing performance of CMOS, this role is bound to be reduced further with time. Hence the focus of this textbook on CMOS only.

1.2 Issues in Digital Integrated Circuit Design

Integration density and performance of integrated circuits have gone through an astounding revolution in the last couple of decades. In the 1960s, Gordon Moore, then with Fairchild Corporation and later cofounder of Intel, predicted that the number of transistors that can be integrated on a single die would grow exponentially with time. This prediction, later called Moore's law, has proven to be amazingly visionary [Moore65]. Its validity is best illustrated with the aid of a set of graphs. Figure 1.2 plots the integration density of both logic ICs and memory as a function of time. As can be observed, integration complexity doubles approximately every 1 to 2 years. As a result, memory density has increased by more than a thousandfold since 1970.

An intriguing case study is offered by the microprocessor. From its inception in the early seventies, the microprocessor has grown in performance and complexity at a steady and predictable pace. The transistor counts for a number of landmark designs are collected in Figure 1.3. The million-transistor/chip barrier was crossed in the late eighties. Clock frequencies double every three years and have reached into the GHz range.
[Figure 1.2: Evolution of integration complexity of logic ICs (a) and memories (b) as a function of time, in number of bits per chip.]

This is illustrated in Figure 1.4, which plots microprocessor performance trends at the beginning of the 21st century. An important observation is that, as of now, these trends have not shown any signs of a slowdown.

It should be no surprise to the reader that this revolution has had a profound impact on how digital circuits are designed. Early designs were truly hand-crafted. Every transistor was laid out and optimized individually and carefully fitted into its environment. This is adequately illustrated in Figure 1.5a, which shows the design of the Intel 4004 microprocessor. This approach is, obviously, not appropriate when more than a million devices have to be created and assembled. With the rapid evolution of the design technology, time-to-market is one of the crucial factors in the ultimate success of a component.

[Figure 1.3: Historical evolution of microprocessor transistor count, from the 4004 through the Pentium 4 (from [Intel01]).]

[Figure 1.4: Microprocessor performance trends (clock frequency in MHz) at the beginning of the 21st century, doubling approximately every two years.]

Designers have, therefore, increasingly adhered to rigid design methodologies and strategies that are more amenable to design automation. The impact of this approach is apparent from the layout of one of the later Intel microprocessors, the Pentium® 4, shown in Figure 1.5b. Instead of the individualized approach of the earlier designs, a circuit is constructed in a hierarchical way: a processor is a collection of modules, each of which in turn consists of a number of cells. Cells are reused as much as possible to reduce the design effort and to enhance the chances for a first-time-right implementation. The fact that this hierarchical approach is at all possible is the key ingredient for the success of digital circuit design and also explains why, for instance, very large scale analog design has never caught on.

The obvious next question is why such an approach is feasible in the digital world and not (or to a lesser degree) in analog designs. The crucial concept here, and the most important one in dealing with the complexity issue, is abstraction. At each design level, the internal details of a complex module can be abstracted away and replaced by a black-box view or model. This model contains virtually all the information needed to deal with the block at the next level of hierarchy.
For instance, once a designer has implemented a multiplier module, its performance can be defined very accurately and can be captured in a model. The performance of this multiplier is in general only marginally influenced by the way it is utilized in a larger system. For all purposes, it can hence be considered a black box with known characteristics. As there exists no compelling need for the system designer to look inside this box, design complexity is substantially reduced. The impact of this divide-and-conquer approach is dramatic. Instead of having to deal with a myriad of elements, the designer has to consider only a handful of components, each of which is characterized in performance and cost by a small number of parameters.

This is analogous to a software designer using a library of software routines such as input/output drivers. Someone writing a large program does not bother to look inside those library routines. The only thing he cares about is the intended result of calling one of those modules. Imagine what writing software programs would be like if one had to fetch every bit individually from the disk and ensure its correctness instead of relying on handy "file open" and "get string" operators.

[Figure 1.5: Comparing the design methodologies of the Intel 4004 (1971) and Pentium® 4 (2000) microprocessors. The Pentium 4 layout is organized into standard cell and memory modules. (Reprinted with permission from Intel.)]

Typically used abstraction levels in digital circuit design are, in order of increasing abstraction, the device, circuit, gate, functional module (e.g., adder), and system levels (e.g., processor), as illustrated in Figure 1.6.

[Figure 1.6: Design abstraction levels in digital circuits, from device through circuit and gate up to module and system.]

A semiconductor device is an entity with a very complex behavior. No circuit designer will ever seriously consider the solid-state physics equations governing the behavior of the device when designing a digital gate. Instead, he will use a simplified model that adequately describes the input-output behavior of the transistor. For instance, an AND gate is adequately described by its Boolean expression (Z = A·B), its bounding box, the position of the input and output terminals, and the delay between the inputs and the output.
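As an aside, this black-box philosophy maps naturally onto code. The sketch below models a gate the way a cell library might publish it: by its logic function and a few characterized parameters, with the transistor-level internals deliberately absent. All names and numbers here (Gate, delay_ps, the 40 ps figure) are invented for illustration; this is a toy sketch of the abstraction idea, not an actual library format.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class Gate:
    """A gate reduced to its black-box view: function, delay, footprint.

    The fields are hypothetical characterization data of the kind a cell
    library would publish; the internals are deliberately absent, which is
    the whole point of the abstraction.
    """
    name: str
    logic: Callable[[int, int], int]  # Boolean input-output behavior
    delay_ps: float                   # characterized input-to-output delay
    area_um2: float                   # bounding-box area

# An AND gate described only by Z = A.B plus its characterized numbers.
AND2 = Gate("AND2", lambda a, b: a & b, delay_ps=40.0, area_um2=5.0)

def path_delay(gates: Tuple[Gate, ...]) -> float:
    """Delay of a series path, computed from the models alone."""
    return sum(g.delay_ps for g in gates)

print(AND2.logic(1, 1))          # -> 1
print(path_delay((AND2, AND2)))  # -> 80.0 (ps), no device physics needed
```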
This design philosophy has been the enabler for the emergence of elaborate computer-aided design (CAD) frameworks for digital integrated circuits; without it, the current design complexity would not have been achievable. Design tools include simulation at the various complexity levels, design verification, layout generation, and design synthesis. An overview of these tools and design methodologies is given in Chapter 8 of this textbook.

Furthermore, to avoid the redesign and reverification of frequently used cells such as basic gates and arithmetic and memory modules, designers most often resort to cell libraries. These libraries contain not only the layouts, but also provide complete documentation and characterization of the behavior of the cells. The use of cell libraries is, for instance, apparent in the layout of the Pentium® 4 processor (Figure 1.5b). The integer and floating-point units, just to name a few, contain large sections designed using the so-called standard cell approach. In this approach, logic gates are placed in rows of cells of equal height and interconnected using routing channels. The layout of such a block can be generated automatically, given that a library of cells is available.

The preceding analysis demonstrates that design automation and modular design practices have effectively addressed some of the complexity issues incurred in contemporary digital design. This leads to the following pertinent question: if design automation solves all our design problems, why should we be concerned with digital circuit design at all? Will the next-generation digital designer ever have to worry about transistors or parasitics, or is the smallest design entity he will ever consider the gate and the module?

The truth is that the reality is more complex, and various reasons exist as to why an insight into digital circuits and their intricacies will still be an important asset for a long time to come.

First of all, someone still has to design and implement the module libraries. Semiconductor technologies continue to advance from year to year. Until a fool-proof approach to "porting" a cell from one technology to another has been developed, each change in technology (which happens approximately every two years) requires a redesign of the library.

Creating an adequate model of a cell or module requires an in-depth understanding of its internal operation. For instance, to identify the dominant performance parameters of a given design, one has to recognize the critical timing path first.

The library-based approach works fine when the design constraints (speed, cost, or power) are not stringent. This is the case for a large number of application-specific designs, where the main goal is to provide a more integrated system solution, and performance requirements are easily within the capabilities of the technology. Unfortunately, for a large number of other products such as microprocessors, success hinges on high performance, and designers therefore tend to push technology to its limits. At that point, the hierarchical approach tends to become somewhat less attractive. To return to our previous analogy to software methodologies, a programmer tends to "customize" software routines when execution speed is crucial; compilers, or design tools, are not yet at the level of what human sweat or ingenuity can deliver.

Even more important is the observation that the abstraction-based approach is only correct to a certain degree. The performance of, for instance, an adder can be substantially influenced by the way it is connected to its environment. The interconnection wires themselves contribute to delay, as they introduce parasitic capacitances, resistances, and even inductances. The impact of the interconnect parasitics is bound to increase in the years to come with the scaling of the technology.

Scaling tends to emphasize some other deficiencies of the abstraction-based model. Some design entities tend to be global or external (to resort anew to the software analogy). Examples of global factors are the clock signals, used for synchronization in a digital design, and the supply lines. Increasing the size of a digital design has a profound effect on these global signals.
For instance, connecting more cells to a supply line can cause a voltage drop over the wire, which, in its turn, can slow down all the connected cells. Issues such as clock distribution, circuit synchronization, and supply-voltage distribution are becoming more and more critical. Coping with them requires a profound understanding of the intricacies of digital circuit design.

Another impact of technology evolution is that new design issues and constraints tend to emerge over time. A typical example is the periodic reemergence of power dissipation as a constraining factor, as was already illustrated in the historical overview. Another example is the changing ratio between device and interconnect parasitics. To cope with these unforeseen factors, one must at least be able to model and analyze their impact, requiring once again a profound insight into circuit topology and behavior.

Finally, when things can go wrong, they do. A fabricated circuit does not always exhibit the exact waveforms one might expect from advance simulations. Deviations can be caused by variations in the fabrication process parameters, by the inductance of the package, or by a badly modeled clock signal. Troubleshooting a design requires circuit expertise.

For all the above reasons, it is my belief that an in-depth knowledge of digital circuit design techniques and approaches is an essential asset for a digital-system designer. Even though she might not have to deal with the details of the circuit on a daily basis, the understanding will help her to cope with unexpected circumstances and to determine the dominant effects when analyzing a design.

Example 1.1 Clocks Defy Hierarchy

To illustrate some of the issues raised above, let us examine the impact of deficiencies in one of the most important global signals in a design, the clock. The function of the clock signal in a digital design is to order the multitude of events happening in the circuit. This task can be compared to the function of a traffic light that determines which cars are allowed to move. It also makes sure that all operations are completed before the next one starts; a traffic light should be green long enough to allow a car or a pedestrian to cross the road. Under ideal circumstances, the clock signal is a periodic step waveform with transitions synchronized throughout the designed circuit (Figure 1.7a). In light of our analogy, changes in the traffic lights should be synchronized to maximize throughput while avoiding accidents. The importance of the clock alignment concept is illustrated with the example of two cascaded registers, both operating on the rising edge of the clock φ (Figure 1.7b). Under normal operating conditions, the input In gets sampled into the first register on the rising edge of φ and appears at the output exactly one clock period later. This is confirmed by the simulations shown in Figure 1.7c (signal Out).

Due to delays associated with routing the clock wires, it may happen that the clocks become misaligned with respect to each other. As a result, the registers interpret the time indicated by the clock signal differently. Consider the case where the clock signal for the second register is delayed, or skewed, by a value δ. The rising edge of the delayed clock φ′ will postpone the sampling of the input of the second register. If the time it takes to propagate the output of the first register to the input of the second is smaller than the clock delay, the latter will sample the wrong value. This causes the output to change prematurely, as clearly illustrated in the simulation, where the signal Out′ goes high at the first rising edge of φ′ instead of the second one.

[Figure 1.7: Impact of clock misalignment: (a) ideal clock waveform; (b) two cascaded registers clocked by φ and a skewed φ′; (c) simulated waveforms.]

In terms of our traffic analogy, cars released by a first traffic light would hit the cars of the next light that have not yet left.
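The race just described can be captured in a few lines. The sketch below is a deliberately simplified check, not a real timing analyzer: it ignores register setup and hold times and only tests whether the register-to-register propagation delay exceeds the skew δ. All numbers are hypothetical.

```python
def second_register_samples_old_value(t_logic_ns: float, skew_ns: float) -> bool:
    """Simplified model of the race described above (no setup/hold margins).

    The first register's output starts changing at the rising edge of phi and
    reaches the second register t_logic_ns later. If it arrives before the
    skewed edge of phi' (skew_ns after phi), the second register samples the
    *new* value a full cycle early -- the premature Out' transition above.
    """
    return t_logic_ns >= skew_ns  # True: the correct (old) value is sampled

# Hypothetical numbers: 0.2 ns register-to-register path, 0.5 ns of skew.
print(second_register_samples_old_value(t_logic_ns=0.2, skew_ns=0.5))  # -> False: race!
```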
Clock misalignment, or clock skew, as it is normally called, is an important example of how global signals may influence the functioning of a hierarchically designed system. Clock skew is actually one of the most critical design problems facing the designers of large, high-performance systems.

Example 1.2 Power Distribution Networks Defy Hierarchy

While the clock signal is one example of a global signal that crosses the chip hierarchy boundaries, the power distribution network represents another. A digital system requires a stable DC voltage to be supplied to the individual gates. To ensure proper operation, this voltage should be stable within a few hundred millivolts. The power distribution system has to provide this stable voltage in the presence of very large current variations. The resistive nature of the on-chip wires and the inductance of the IC package pins make this a difficult proposition. For example, the average DC current to be supplied to a 100 W, 1 V microprocessor equals 100 A! The peak current can easily be twice as large, and current demand can readily change from almost zero to this peak value over a short time, in the range of 1 nsec or less. This leads to a current variation of 100 GA/sec, which is a truly astounding number.

Consider the problem of the resistance of power-distribution wires. A current of 1 A running through a wire with a resistance of 1 Ω causes a voltage drop of 1 V. With supply voltages of modern digital circuits ranging between 1.2 and 2.5 V, such a drop is unacceptable. Making the wires wider reduces the resistance, and hence the voltage drop. While this sizing of the power network is relatively simple in a flat design approach, it is a lot more complex in a hierarchical design. For example, consider the two blocks in Figure 1.8a [Saleh01]. If power distribution for Block A is examined in isolation, the additional loading due to the presence of Block B is not taken into account. If power is routed through Block A to Block B, a larger IR drop will occur in Block B, since power is also being consumed by Block A before it reaches Block B.

[Figure 1.8: Power distribution network design: (a) routing through the block; (b) routing around the block.]

Since the total IR drop is based on the resistance seen from the pin to the block, one could route around the block and feed power to each block separately, as shown in Figure 1.8b. Ideally, the main trunks should be large enough to handle all the current flowing through the separate branches. Although routing power this way is easier to control and maintain, it also requires more area to implement. The large metal trunks of power have to be sized to handle all the current for each block.
This requirement forces designers to set aside area for power busing that takes away from the available routing area. As more and more blocks are added, the complex interactions between the blocks determine the actual voltage drops. For instance, it is not always easy to determine which way the current will flow when multiple parallel paths are available between the power source and the consuming gate. Also, currents into the different modules rarely peak at the same time. All these considerations make the design of the power-distribution network a challenging job. It requires a design methodology approach that supersedes the artificial boundaries imposed by hierarchical design.

The purpose of this textbook is to provide a bridge between the abstract vision of digital design and the underlying digital circuit and its peculiarities. While starting from a solid understanding of the operation of electronic devices and an in-depth analysis of the nucleus of digital design, the inverter, we will gradually channel this knowledge into the design of more complex entities, such as complex gates, datapaths, registers, controllers, and memories. The persistent quest for a designer when designing each of the mentioned modules is to identify the dominant design parameters, to locate the section of the design on which he should focus his optimizations, and to determine the specific properties that make the module under investigation (e.g., a memory) different from any other.

The text also addresses other compelling (global) issues in modern digital circuit design, such as power dissipation, interconnect, timing, and synchronization.

1.3 Quality Metrics of a Digital Design

This section defines a set of basic properties of a digital design. These properties help to quantify the quality of a design from different perspectives: cost, functionality, robustness, performance, and energy consumption. Which of these metrics is most important depends upon the application. For instance, pure speed is a crucial property in a compute server. On the other hand, energy consumption is a dominant metric for hand-held mobile applications such as cell phones. The introduced properties are relevant at all levels of the design hierarchy, be it system, chip, module, or gate. To ensure consistency in the definitions throughout the design hierarchy stack, we propose a bottom-up approach: we start by defining the basic quality metrics of a simple inverter, and gradually expand these to more complex functions such as gate, module, and chip.

1.3.1 Cost of an Integrated Circuit

The total cost of any product can be separated into two components: the recurring expenses, or variable cost, and the non-recurring expenses, or fixed cost.

Fixed Cost

The fixed cost is independent of the sales volume, the number of products sold. An important component of the fixed cost of an integrated circuit is the effort in time and manpower it takes to produce the design. This design cost is strongly influenced by the complexity of the design, the aggressiveness of the specifications, and the productivity of the designer. Advanced design methodologies that automate major parts of the design process can help to boost the latter. Bringing down the design cost in the presence of ever-increasing IC complexity is one of the major challenges that is always facing the semiconductor industry.
Additionally, one has to account for the indirect costs, the company overhead that cannot be billed directly to one product. This includes, among others, the company's research and development (R&D), manufacturing equipment, marketing, sales, and building infrastructure.

Variable Cost

This accounts for the cost that is directly attributable to a manufactured product, and is hence proportional to the product volume. Variable costs include the costs of the parts used in the product, assembly costs, and testing costs. The total cost of an integrated circuit is now

$$\text{cost per IC} = \text{variable cost per IC} + \frac{\text{fixed cost}}{\text{volume}} \tag{1.1}$$

The impact of the fixed cost is more pronounced for small-volume products. This also explains why it makes sense to have a large design team working for a number of years on a hugely successful product such as a microprocessor.

[Figure 1.9: Finished wafer. Each square represents a die, in this case the AMD Duron™ microprocessor. (Reprinted with permission from AMD.)]

While the cost of producing a single transistor has dropped exponentially over the past decades, the basic variable-cost equation has not changed:

$$\text{variable cost} = \frac{\text{cost of die} + \text{cost of die test} + \text{cost of packaging}}{\text{final test yield}} \tag{1.2}$$

As will be elaborated on in Chapter 2, the IC manufacturing process groups a number of identical circuits onto a single wafer (Figure 1.9). Upon completion of the fabrication, the wafer is chopped into dies, which are then individually packaged after being tested. We will focus on the cost of the dies in this discussion. The cost of packaging and test is the topic of later chapters. The die cost depends upon the number of dies on a wafer and the percentage of those that are functional. The latter factor is called the die yield.

$$\text{cost of die} = \frac{\text{cost of wafer}}{\text{dies per wafer} \times \text{die yield}} \tag{1.3}$$

The number of dies per wafer is, in essence, the area of the wafer divided by the die area. The actual situation is somewhat more complicated, as wafers are round and chips are square; dies around the perimeter of the wafer are therefore lost. The size of the wafer has been steadily increasing over the years, yielding more dies per fabrication run. Eq. (1.3) also presents the first indication that the cost of a circuit is dependent upon the chip area: increasing the chip area simply means that fewer dies fit on a wafer.

The actual relation between cost and area is more complex, and depends upon the die yield. Both the substrate material and the manufacturing process introduce faults that can cause a chip to fail. Assuming that the defects are randomly distributed over the wafer, and that the yield is inversely proportional to the complexity of the fabrication process, we obtain the following expression for the die yield:

$$\text{die yield} = \left(1 + \frac{\text{defects per unit area} \times \text{die area}}{\alpha}\right)^{-\alpha} \tag{1.4}$$

α is a parameter that depends upon the complexity of the manufacturing process, and is roughly proportional to the number of masks. α = 3 is a good estimate for today's complex CMOS processes. The defects per unit area is a measure of the material- and process-induced faults. A value between 0.5 and 1 defects/cm² is typical these days, but depends strongly upon the maturity of the process.

Example 1.3 Die Yield

Assume a wafer size of 12 inch, a die size of 2.5 cm², 1 defect/cm², and α = 3. Determine the die yield of this CMOS process run.

The number of dies per wafer can be estimated with the following expression, which takes into account the dies lost around the perimeter of the wafer:

$$\text{dies per wafer} = \frac{\pi \times (\text{wafer diameter}/2)^2}{\text{die area}} - \frac{\pi \times \text{wafer diameter}}{\sqrt{2 \times \text{die area}}}$$

This means 252 (= 296 − 44) potentially operational dies for this particular example. The die yield can be computed with the aid of Eq. (1.4), and equals 16%! This means that on average only 40 of the dies will be fully functional.
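The cost and yield relations of Eqs. (1.1) through (1.4) are easy to put into executable form. The following sketch implements the dies-per-wafer estimate and Eq. (1.4), and reproduces the numbers of Example 1.3 up to rounding; the $5000 wafer cost is an invented figure for illustration.

```python
import math

def dies_per_wafer(wafer_diameter_cm: float, die_area_cm2: float) -> float:
    """Gross die count, including the perimeter-loss correction of Example 1.3."""
    radius = wafer_diameter_cm / 2
    return (math.pi * radius ** 2 / die_area_cm2
            - math.pi * wafer_diameter_cm / math.sqrt(2 * die_area_cm2))

def die_yield(defects_per_cm2: float, die_area_cm2: float, alpha: float = 3.0) -> float:
    """Eq. (1.4)."""
    return (1 + defects_per_cm2 * die_area_cm2 / alpha) ** -alpha

def cost_per_good_die(wafer_cost: float, wafer_diameter_cm: float,
                      die_area_cm2: float, defects_per_cm2: float) -> float:
    """Eq. (1.3): the wafer cost is carried by the functional dies only."""
    good_dies = (dies_per_wafer(wafer_diameter_cm, die_area_cm2)
                 * die_yield(defects_per_cm2, die_area_cm2))
    return wafer_cost / good_dies

# Example 1.3 inputs: a 12-inch (30.48 cm) wafer, 2.5 cm^2 dies, 1 defect/cm^2.
print(round(dies_per_wafer(30.48, 2.5)))        # -> 249 gross dies (text: 252)
print(round(die_yield(1.0, 2.5), 3))            # -> 0.162, the 16% of the example
print(round(cost_per_good_die(5000.0, 30.48, 2.5, 1.0), 2))  # assumed $5000 wafer
```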
The bottom line is that the number of functional dies per wafer, and hence the cost per die, is a strong function of the die area. While the yield tends to be excellent for smaller designs, it drops rapidly once a certain threshold is exceeded. Bearing in mind the equations derived above and the typical parameter values, we can conclude that die costs are proportional to the fourth power of the area:

$$\text{cost of die} = f(\text{die area})^4 \tag{1.5}$$

The area is a function that is directly controllable by the designer(s), and is the prime metric for cost. Small area is hence a desirable property for a digital gate. The smaller the gate, the higher the integration density and the smaller the die size. Smaller gates furthermore tend to be faster and consume less energy, as the total gate capacitance, which is one of the dominant performance parameters, often scales with the area.

The number of transistors in a gate is indicative of the expected implementation area. Other parameters may have an impact, though. For instance, a complex interconnect pattern between the transistors can cause the wiring area to dominate. The gate complexity, as expressed by the number of transistors and the regularity of the interconnect structure, also has an impact on the design cost. Complex structures are harder to implement and tend to take more of the designer's valuable time. Simplicity and regularity are precious properties in cost-sensitive designs.

1.3.2 Functionality and Robustness

A prime requirement for a digital circuit is, obviously, that it performs the function it is designed for. The measured behavior of a manufactured circuit normally deviates from the expected response. One reason for this aberration is variation in the manufacturing process. The dimensions, threshold voltages, and currents of an MOS transistor vary between runs, or even on a single wafer or die. The electrical behavior of a circuit can be profoundly affected by those variations. The presence of disturbing noise sources on or off the chip is another source of deviations in circuit response. The word noise in the context of digital circuits means "unwanted variations of voltages and currents at the logic nodes." Noise signals can enter a circuit in many ways. Some examples of digital noise sources are depicted in Figure 1.10. For instance, two wires placed side by side in an integrated circuit form a coupling capacitor and a mutual inductance.
Hence, a voltage or current change on one of the wires can influence the signals on the neighboring wire. Noise on the power and ground rails of a gate also influences the signal levels in the gate.

[Figure 1.10: Noise sources in digital circuits: (a) inductive coupling; (b) capacitive coupling; (c) power and ground noise.]

Most noise in a digital system is internally generated, and the noise value is proportional to the signal swing. Capacitive and inductive cross talk and internally generated power supply noise are examples of such noise. Other noise sources, such as input power supply noise, are external to the system, and their value is not related to the signal levels. For these sources, the noise level is directly expressed in Volts or Amperes. Noise sources that are a function of the signal level are better expressed as a fraction or percentage of the signal level. Noise is a major concern in the engineering of digital circuits. How to cope with all these disturbances is one of the main challenges in the design of high-performance digital circuits, and is a recurring topic in this book.

The steady-state parameters (also called the static behavior) of a gate measure how robust the circuit is with respect to both variations in the manufacturing process and noise disturbances. The definition and derivation of these parameters requires a prior understanding of how digital signals are represented in the world of electronic circuits. Digital circuits perform operations on logical (or Boolean) variables. A logical variable x can only assume two discrete values: x ∈ {0,1}. As an example, the inversion (i.e., the function that an inverter performs) implements the following compositional relationship between two Boolean variables x and y:

$$y = \overline{x}: \quad \{x = 0 \Rightarrow y = 1;\ x = 1 \Rightarrow y = 0\} \tag{1.6}$$

A logical variable is, however, a mathematical abstraction. In a physical implementation, such a variable is represented by an electrical quantity. This is most often a node voltage that is not discrete but can adopt a continuous range of values. This electrical voltage is turned into a discrete variable by associating a nominal voltage level with each logic state: 1 ⇔ VOH, 0 ⇔ VOL, where VOH and VOL represent the high and the low logic levels, respectively. Applying VOH to the input of an inverter yields VOL at the output, and vice versa. The difference between the two is called the logic or signal swing Vsw.

$$V_{OH} = \overline{(V_{OL})} \qquad V_{OL} = \overline{(V_{OH})} \tag{1.7}$$

The Voltage-Transfer Characteristic

Assume now that a logical variable in serves as the input to an inverting gate that produces the variable out. The electrical function of a gate is best expressed by its voltage-transfer characteristic (VTC), sometimes called the DC transfer characteristic, which plots the output voltage as a function of the input voltage: Vout = f(Vin). An example of an inverter VTC is shown in Figure 1.11. The high and low nominal voltages, VOH and VOL, can readily be identified: VOH = f(VOL) and VOL = f(VOH). Another point of interest of the VTC is the gate or switching threshold voltage VM (not to be confused with the threshold voltage of a transistor), which is defined as VM = f(VM). VM can also be found graphically at the intersection of the VTC curve and the line given by Vout = Vin. The gate threshold voltage represents the midpoint of the switching characteristics, and is the operating point obtained when the output of a gate is short-circuited to the input. This point will prove to be of particular interest when studying circuits with feedback (also called sequential circuits).

[Figure 1.11: Inverter voltage-transfer characteristic, with VOH, VOL, VM, and the Vout = Vin line indicated.]
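Numerically, VM is just the fixed point f(VM) = VM, which bisection finds directly. The sketch below does this for a made-up smooth inverter VTC (a logistic curve swinging between assumed rails of 0.1 V and 2.4 V); the curve and all of its parameters are illustrative stand-ins, not a model of any real gate.

```python
import math

# A made-up smooth inverter VTC: swings from VOH ~ 2.4 V down to VOL ~ 0.1 V,
# with its high-gain region centered on 1.25 V. Purely illustrative.
VOL, VOH = 0.1, 2.4
def vtc(vin: float) -> float:
    return VOL + (VOH - VOL) / (1 + math.exp(8.0 * (vin - 1.25)))

def switching_threshold(f, lo: float = 0.0, hi: float = 2.5, tol: float = 1e-6) -> float:
    """Solve f(VM) = VM by bisection on g(v) = f(v) - v.

    For an inverter, g is positive at the low end (output high) and negative
    at the high end (output low), so g crosses zero exactly once: at VM, the
    intersection of the VTC with the Vout = Vin line.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(switching_threshold(vtc), 3))  # -> 1.25 V for this made-up VTC
```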
Even if an ideal nominal value is applied at the input of a gate, the output signal often deviates from the expected nominal value. These deviations can be caused by noise or by the loading on the output of the gate (i.e., by the number of gates connected to the output signal). Figure 1.12a illustrates how a logic level is represented in reality by a range of acceptable voltages, separated by a region of uncertainty, rather than by nominal levels alone. The regions of acceptable high and low voltages are delimited by the VIH and VIL voltage levels, respectively. These represent, by definition, the points where the gain (= dVout/dVin) of the VTC equals −1, as shown in Figure 1.12b. The region between VIH and VIL is called the undefined region (sometimes also referred to as the transition width, or TW). Steady-state signals should avoid this region if proper circuit operation is to be ensured.

Noise Margins

For a gate to be robust and insensitive to noise disturbances, it is essential that the "0" and "1" intervals be as large as possible. A measure of the sensitivity of a gate to noise is given by the noise margins NML (noise margin low) and NMH (noise margin high), which quantify the size of the legal "0" and "1" ranges, respectively, and set a fixed maximum threshold on the noise value:

$$NM_L = V_{IL} - V_{OL} \qquad NM_H = V_{OH} - V_{IH} \tag{1.8}$$

The noise margins represent the levels of noise that can be sustained when gates are cascaded, as illustrated in Figure 1.13. It is obvious that the margins should be larger than 0 for a digital circuit to be functional, and by preference should be as large as possible.

[Figure 1.12: Mapping logic levels to the voltage domain: (a) relationship between voltage and logic levels; (b) definition of VIH and VIL as the points where the VTC slope equals −1.]

[Figure 1.13: Cascaded inverter gates (stage M driving stage M + 1): definition of noise margins.]
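VIL and VIH can likewise be extracted numerically as the two points where the slope of the VTC crosses −1, after which Eq. (1.8) yields the noise margins. The sketch below reuses the same hypothetical logistic VTC as in the previous sketch; again, every parameter is an assumption made for illustration.

```python
import math

# The same hypothetical logistic VTC as in the previous sketch.
VOL, VOH = 0.1, 2.4
def vtc(v: float) -> float:
    return VOL + (VOH - VOL) / (1 + math.exp(8.0 * (v - 1.25)))

def gain(v: float, h: float = 1e-5) -> float:
    """Numerical slope dVout/dVin (central difference)."""
    return (vtc(v + h) - vtc(v - h)) / (2 * h)

def unity_gain_points(lo: float = 0.0, hi: float = 2.5, n: int = 25_000) -> list:
    """Scan for the two input voltages where the slope crosses -1.

    The first crossing is VIL and the second VIH, per Figure 1.12b.
    """
    points, prev_v, prev_g = [], lo, gain(lo)
    for i in range(1, n + 1):
        v = lo + (hi - lo) * i / n
        g = gain(v)
        if (prev_g + 1) * (g + 1) < 0:  # slope passed through -1 inside (prev_v, v)
            points.append((prev_v + v) / 2)
        prev_v, prev_g = v, g
    return points

VIL, VIH = unity_gain_points()
print(f"VIL ~ {VIL:.2f} V, VIH ~ {VIH:.2f} V")                        # ~0.90 and ~1.60 V
print(f"NML ~ {VIL - vtc(2.5):.2f} V, NMH ~ {vtc(0.0) - VIH:.2f} V")  # Eq. (1.8)
```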
Regenerative Property

A large noise margin is a desirable, but not sufficient, requirement. Assume that a signal is disturbed by noise and differs from the nominal voltage levels. As long as the signal is within the noise margins, the following gate continues to function correctly, although its output voltage varies from the nominal one. This deviation is added to the noise injected at the output node and passed to the next gate. The effect of different noise sources may accumulate and eventually force a signal level into the undefined region. This, fortunately, does not happen if the gate possesses the regenerative property, which ensures that a disturbed signal gradually converges back to one of the nominal voltage levels after passing through a number of logical stages. This property can be understood as follows: an input voltage vin (vin ∈ "0") is applied to a chain of N inverters (Figure 1.14a). Assuming that the number of inverters in the chain is even, the output voltage vout (N → ∞) will equal VOL if and only if the inverter possesses the regenerative property. Similarly, when an input voltage vin (vin ∈ "1") is applied to the inverter chain, the output voltage will approach the nominal value VOH.

[Figure 1.14: The regenerative property: (a) a chain of inverters; (b) simulated response of a chain of MOS inverters.]

Example 1.4 Regenerative Property

The concept of regeneration is illustrated in Figure 1.14b, which plots the simulated transient response of a chain of CMOS inverters. The input signal to the chain is a step waveform with a degraded amplitude, which could be caused by noise. Instead of swinging from rail to rail, v0 only extends between 2.1 and 2.9 V. From the simulation, it can be observed that this deviation rapidly disappears while progressing through the chain; v1, for instance, extends from 0.6 V to 4.45 V. Even further, v2 already swings between the nominal VOL and VOH. The inverter used in this example clearly possesses the regenerative property.

The conditions under which a gate is regenerative can be intuitively derived by analyzing a simple case study. Figure 1.15a plots the VTC of an inverter, Vout = f(Vin), as well as its inverse function finv(), which reverses the roles of the x- and y-axes and is defined as follows:

$$out = f(in) \Rightarrow in = f_{inv}(out) \tag{1.9}$$

Assume that a voltage v0, deviating from the nominal voltages, is applied to the first inverter in the chain. The output voltage of this inverter equals v1 = f(v0) and is applied to the next inverter. Graphically, this corresponds to v1 = finv(v2). The signal voltage gradually converges to the nominal signal after a number of inverter stages, as indicated by the arrows.

[Figure 1.15: Conditions for regeneration: (a) a regenerative gate; (b) a nonregenerative gate.]

In Figure 1.15b, the signal does not converge to any of the nominal voltage levels, but to an intermediate voltage level; hence, the characteristic is nonregenerative. The difference between the two cases is due to the gain characteristics of the gates. To be regenerative, the VTC should have a transient region (or undefined region) with a gain greater than 1 in absolute value, bordered by two legal zones where the gain is smaller than 1. Such a gate has two stable operating points. This clarifies the definition of the VIH and VIL levels that form the boundaries between the legal and the transient zones.
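The staircase construction of Figure 1.15 amounts to iterating v_{i+1} = f(v_i). The sketch below passes a degraded "0" through a chain of the hypothetical high-gain inverters used earlier: because the mid-swing gain exceeds 1 in magnitude while the gain near the rails is well below 1, the levels visibly converge to the nominal values, echoing Example 1.4. All values are illustrative.

```python
import math

# The same hypothetical high-gain inverter VTC as before: |gain| > 1 in the
# middle of the swing and well below 1 near the rails, so it is regenerative.
def vtc(v: float) -> float:
    return 0.1 + 2.3 / (1 + math.exp(8.0 * (v - 1.25)))

def through_chain(v0: float, stages: int) -> list:
    """Levels along a chain of cascaded inverters: v_{i+1} = f(v_i)."""
    levels = [v0]
    for _ in range(stages):
        levels.append(vtc(levels[-1]))
    return levels

# A badly degraded "0" of 0.9 V, far above the nominal VOL of ~0.1 V:
print([round(v, 3) for v in through_chain(0.9, 6)])
# -> [0.9, 2.268, 0.101, 2.4, 0.1, 2.4, 0.1]: even-numbered stages converge
#    to VOL and odd-numbered ones to VOH, as in Example 1.4.
```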
Noise Immunity

While the noise margin is a meaningful means for measuring the robustness of a circuit against noise, it is not sufficient. It expresses the capability of a circuit to "overpower" a noise source. Noise immunity, on the other hand, expresses the ability of the system to process and transmit information correctly in the presence of noise [Dally98]. Many digital circuits with low noise margins have very good noise immunity, because they reject a noise source rather than overpower it. These circuits have the property that only a small fraction of a potentially damaging noise source is coupled to the important circuit nodes. More precisely, the transfer function between noise source and signal node is far smaller than 1. Circuits that do not possess this property are susceptible to noise.

To study the noise immunity of a gate, we have to construct a noise budget that allocates the power budget to the various noise sources. As discussed earlier, the noise sources can be divided into sources that are proportional to the signal swing Vsw, whose impact on the signal node is expressed as g·Vsw, and sources that are fixed, whose impact on the signal node equals f·VNf, with VNf the amplitude of the noise source and f the transfer function from noise source to signal node. We assume, for the sake of simplicity, that the noise margin equals half the signal swing (for both H and L). To operate correctly, the noise margin has to be larger than the sum of the coupled noise values:

$$V_{NM} = \frac{V_{sw}}{2} \ge \sum_i f_i V_{Nfi} + \sum_j g_j V_{sw} \tag{1.10}$$

Given a set of noise sources, we can derive the minimum signal swing necessary for the system to be operational:

$$V_{sw} \ge \frac{2 \sum_i f_i V_{Nfi}}{1 - 2 \sum_j g_j} \tag{1.11}$$

This makes it clear that the signal swing (and the noise margin) has to be large enough to overpower the impact of the fixed sources (f·VNf). On the other hand, the sensitivity to internal sources depends primarily upon the noise-suppressing capabilities of the gate, that is, the proportionality or gain factors gj. In the presence of large gain factors, increasing the signal swing does not help to suppress noise, as the noise increases proportionally. In later chapters, we will discuss some differential logic families that suppress most of the internal noise, and hence can get away with very small noise margins and signal swings.
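Eq. (1.11) turns directly into a small budgeting helper. In the sketch below, the noise sources and their transfer functions are hypothetical values chosen only to exercise the formula; note that the equation yields no feasible swing once the proportional gains g_j sum to 1/2 or more.

```python
def min_signal_swing(fixed_sources, proportional_gains):
    """Eq. (1.11): the smallest swing whose noise margin (Vsw/2) overpowers
    the total coupled noise.

    fixed_sources:      (f_i, V_Nfi) pairs, the transfer function and the
                        amplitude of each fixed-amplitude noise source.
    proportional_gains: the g_j factors of sources that scale with the swing.
    """
    g_total = sum(proportional_gains)
    if g_total >= 0.5:
        raise ValueError("sum of g_j >= 1/2: no swing satisfies Eq. (1.11)")
    return 2 * sum(f * v_nf for f, v_nf in fixed_sources) / (1 - 2 * g_total)

# Hypothetical budget: 50 mV of external supply noise coupling in with
# f = 0.4, plus cross talk contributing g = 0.1 of the swing.
print(min_signal_swing([(0.4, 0.05)], [0.1]))  # -> 0.05 V minimum swing
```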
Directivity

The directivity property requires a gate to be unidirectional: changes in an output level should not appear at any unchanging input of the same circuit. If they do, an output-signal transition reflects back to the gate inputs as a noise signal, affecting the signal integrity. In real gate implementations, full directivity can never be achieved. Some feedback of changes in output levels to the inputs cannot be avoided; capacitive coupling between inputs and outputs is a typical example of such a feedback. It is important to minimize these changes so that they do not affect the logic levels of the input signals.

Fan-In and Fan-Out

The fan-out denotes the number of load gates N that are connected to the output of the driving gate (Figure 1.16a). Increasing the fan-out of a gate can affect its logic output levels. From the world of analog amplifiers, we know that this effect is minimized by making the input resistance of the load gates as large as possible (minimizing the input currents) and by keeping the output resistance of the driving gate small (reducing the effects of load currents on the output voltage). When the fan-out is large, the added load can deteriorate the dynamic performance of the driving gate. For these reasons, many generic and library components define a maximum fan-out to guarantee that the static and dynamic performance of the element meet specification. The fan-in of a gate is defined as the number of inputs to the gate (Figure 1.16b). Gates with large fan-in tend to be more complex, which often results in inferior static and dynamic properties.

[Figure 1.16: Definition of the fan-out (a) and fan-in (b) of a digital gate.]

The Ideal Digital Gate

Based on the above observations, we can define the ideal digital gate from a static perspective. The ideal inverter model is important because it gives us a metric by which we can judge the quality of actual implementations. Its VTC is shown in Figure 1.17 and has the following properties: infinite gain in the transition region, and a gate threshold located in the middle of the logic swing, with high and low noise margins equal to half the swing. The input and output impedances of the ideal gate are infinity and zero, respectively (i.e., the gate has unlimited fan-out). While this ideal VTC is unfortunately impossible to achieve in real designs, some implementations, such as the static CMOS inverter, come close.

[Figure 1.17: Ideal voltage-transfer characteristic, with infinite gain (g = −∞) in the transition region.]

Example 1.5 Voltage-Transfer Characteristic

Figure 1.18 shows an example of a voltage-transfer characteristic of an actual, but outdated, gate structure (as produced by SPICE in the DC analysis mode). The values of the dc parameters are derived from inspection of the graph:

VOH = 3.5 V; VOL = 0.45 V
VIH = 2.35 V; VIL = 0.66 V
VM = 1.64 V
NMH = 1.15 V; NML = 0.21 V

The observed transfer characteristic, obviously, is far from ideal: it is asymmetrical, has a very low value for NML, and its voltage swing of 3.05 V is substantially below the maximum obtainable value of 5 V (which is the value of the supply voltage for this design).

[Figure 1.18: Voltage-transfer characteristic of an NMOS inverter of the 1970s.]

1.3.3 Performance

From a system designer's perspective, the performance of a digital circuit expresses the computational load that the circuit can manage. For instance, a microprocessor is often characterized by the number of instructions it can execute per second. This performance metric depends both on the architecture of the processor (for instance, the number of instructions it can execute in parallel) and on the actual design of the logic circuitry. While the former is crucially important, it is not the focus of this textbook. We refer the reader to the many excellent books on this topic [for instance, Hennessy96]. When focusing on the pure design, performance is most often expressed by the duration of the clock period (clock cycle time) or its rate (clock frequency). The minimum value of the clock period for a given technology and design is set by a number of factors, such as the time it takes for the signals to propagate through the logic, the time it takes to get the data in and out of the registers, and the uncertainty of the clock arrival times. Each of these topics will be discussed in detail in the course of this textbook. At the core of the whole performance analysis, however, lies the performance of an individual gate.

The propagation delay tp of a gate defines how quickly it responds to a change at its input(s). It expresses the delay experienced by a signal when passing through a gate. It is measured between the 50% transition points of the input and output waveforms, as shown in Figure 1.19 for an inverting gate.² Because a gate displays different response times for rising or falling input waveforms, two definitions of the propagation delay are necessary.
The tpLH defines the response time of the gate for a low-to-high (or positive) output transition, while tpHL refers to a high-to-low (or negative) transition. The propagation delay tp is defined as the average of the two:

$$t_p = \frac{t_{pLH} + t_{pHL}}{2} \tag{1.12}$$

[Figure 1.19: Definition of propagation delays and rise and fall times, measured at the 50% (delay) and 10%/90% (rise/fall) points of the waveforms.]

² The 50% definition is inspired by the assumption that the switching threshold VM is typically located in the middle of the logic swing.

CAUTION: Observe that the propagation delay tp, in contrast to tpLH and tpHL, is an artificial gate quality metric, and has no physical meaning per se. It is mostly used to compare different semiconductor technologies or logic design styles.

The propagation delay is not only a function of the circuit technology and topology, but depends upon other factors as well. Most importantly, the delay is a function of the slopes of the input and output signals of the gate. To quantify these properties, we introduce the rise and fall times tr and tf, which are metrics that apply to individual signal waveforms rather than gates (Figure 1.19), and express how fast a signal transits between the different levels. The uncertainty over when a transition actually starts or ends is avoided by defining the rise and fall times between the 10% and 90% points of the waveforms, as shown in the figure. The rise/fall time of a signal is largely determined by the strength of the driving gate and the load presented by the node itself, which sums the contributions of the connecting gates (fan-out) and the wiring parasitics.

When comparing the performance of gates implemented in different technologies or circuit styles, it is important not to confuse the picture by including parameters such as load factors, fan-in, and fan-out. A uniform way of measuring the tp of a gate, so that technologies can be judged on an equal footing, is desirable. The de facto standard circuit for delay measurement is the ring oscillator, which consists of an odd number of inverters connected in a circular chain (Figure 1.20). Due to the odd number of inversions, this circuit does not have a stable operating point and oscillates. The period T of the oscillation is determined by the propagation time of a signal transition through the complete chain, or T = 2 × tp × N, with N the number of inverters in the chain. The factor 2 results from the observation that a full cycle requires both a low-to-high and a high-to-low transition. Note that this equation is only valid for 2Ntp >> tf + tr. If this condition is not met, the circuit might not oscillate: one "wave" of signals propagating through the ring will overlap with a successor and eventually dampen the oscillation. Typically, a ring oscillator needs at least five stages to be operational.

[Figure 1.20: Ring oscillator circuit for propagation-delay measurement.]

CAUTION: We must be extremely careful with results obtained from ring oscillator measurements. A tp of 20 psec by no means implies that a circuit built with those gates will operate at 50 GHz. The oscillator results are primarily useful for quantifying the differences between various manufacturing technologies and gate topologies. The oscillator is an idealized circuit where each gate has a fan-in and fan-out of exactly one and parasitic loads are minimal. In more realistic digital circuits, fan-ins and fan-outs are higher, and interconnect delays are non-negligible. The gate functionality is also substantially more complex than a simple invert operation. As a result, the achievable clock frequency is, on average, 50 to 100 times slower than the frequency predicted from ring oscillator measurements. This is an average observation; carefully optimized designs might approach the ideal frequency more closely.
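The relation T = 2 × N × tp turns a measured ring-oscillator period into a per-gate delay with one division. The sketch below does the bookkeeping both ways; the 51-stage ring and the 5.1 ns period are invented measurement values.

```python
def tp_from_ring(period_ns: float, n_stages: int) -> float:
    """Invert T = 2 * N * tp to get the per-gate delay from a measured
    ring-oscillator period (valid only while 2*N*tp >> tr + tf)."""
    return period_ns / (2 * n_stages)

def ring_frequency_mhz(tp_ns: float, n_stages: int) -> float:
    """Oscillation frequency of an N-stage ring: f = 1 / (2 * N * tp)."""
    return 1000.0 / (2 * n_stages * tp_ns)

# Invented measurement: a 51-stage ring oscillating with a 5.1 ns period.
print(tp_from_ring(5.1, 51))         # -> 0.05 ns, i.e. 50 ps per gate
print(ring_frequency_mhz(0.05, 51))  # -> ~196 MHz, far below 1/tp = 20 GHz
```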
Example 1.6 Propagation Delay of First-Order RC Network

Digital circuits are often modeled as first-order RC networks of the type shown in Figure 1.21.
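The transcript breaks off here, but the direction of the example is standard: for a voltage step applied to a first-order RC network, the output follows v(t) = V(1 − e^(−t/RC)) and crosses the 50% point at t = RC·ln 2 ≈ 0.69·RC, which is the propagation delay under the 50% definition above. A minimal sketch, with an invented 10 kΩ / 10 fF driver-load pair:

```python
import math

def rc_propagation_delay(r_ohms: float, c_farads: float) -> float:
    """50% crossing of a first-order RC step response.

    v(t) = V * (1 - exp(-t / (R*C))) reaches V/2 at t = R*C*ln(2) ~ 0.69*RC,
    the propagation delay under the 50% definition used above.
    """
    return r_ohms * c_farads * math.log(2)

# Invented driver/load pair: 10 kOhm effective resistance into 10 fF.
print(rc_propagation_delay(10e3, 10e-15))  # -> ~6.93e-11 s, i.e. ~69 ps
```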