Document Details


Uploaded by PraiseworthyBlackberryBush7916

The University of Sheffield

2013

Luke Seed

Tags

VLSI technology integrated circuits electronics semiconductors

Summary

This document provides an overview of trends in VLSI (very-large-scale integration) technology. It explores the exponential improvements in functionality and cost, considers the limitations and hurdles that arise as the technology continues to shrink, and focuses on the roles of transistors, microprocessors, and memory. The document examines intrinsic limits and future projections, and includes figures and formulas.

Full Transcript


Trends in VLSI

Introduction

The trends in VLSI design and technology are predicated, at this point in time, on Moore's Law [1], which can be stated informally as:

"The functionality of devices doubles every 18 months" or "the cost of the same functionality halves every 18 months."

What this points to is an exponential improvement, over time, in what can be achieved using VLSI. Whilst this law has held more or less true over the last 30 years, the improvement cannot go on forever and a time will be reached when this rapid progress ceases, to be replaced by more modest incremental gains. It is at this point that VLSI can be considered a mature industry. Exactly when this point will be reached is not certain and, indeed, academics and industry experts alike have been predicting the imminent end of Moore's Law almost since it was originally stated. Various insurmountable technological hurdles have been cited as the cause of the end, most notably photolithographic limitations. However, these hurdles have all been surmounted or avoided altogether by shifts in technology. Unfortunately, the time is coming and, with technology now below 20 nm, the end is relatively close.

[1] Gordon Moore, founder of Intel Corp.

Technology Shrinking

The exponential improvements in VLSI have generally been fuelled by shrinking the size of the technology. Consider the plan view of the Field Effect Transistor (FET) in Figure 1.

Figure 1: FET Plan View (the minimum dimension λ is marked)

The FET consists of areas of polysilicon (non-crystalline silicon) and metal (usually Aluminium but, increasingly, Copper) deposited onto silicon in which defined areas have been doped with n and p implants. These materials are separated by barriers formed from Silicon Dioxide, Polyimide, etc. During manufacture, the various areas are defined using a number of photolithographic masks and the size of the technology is usually defined by the minimum achievable dimension, λ. In most cases, this will be the length of the FET's channel (the width of the gate), as shown. This is often referred to as the half-pitch: it assumes a set of wires of this width separated by gaps of the same size, so λ is the half-pitch of the wires.

Generally, as λ scales by 1/s, other dimensions scale too (although not linearly in many cases). This scaling has a number of desirable effects: the area of the FET scales as s⁻²; the maximum frequency of operation of the FET scales (nearly) as s; parasitic capacitances scale as s⁻¹ (all other things being equal, which is usually not the case); whilst the resistance of wires scales as s. Note that the RC product therefore scales as s·s⁻¹ = 1, i.e. it is independent of scale. The net effect, as s increases, is that the number of devices that can be placed on a chip of a particular size increases as s² (drastically increasing the available functionality), or the same number of devices can fit into a correspondingly smaller area (with a concomitant effect on the cost of manufacture). Furthermore, the frequency at which the scaled devices can operate increases and this provides additional functionality per unit time.
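To make these scaling relationships concrete, the short Python sketch below simply tabulates them for an example scale factor. The value s = 1.4 (roughly one technology generation) is an illustrative assumption, not a figure taken from the text.

```python
# Illustrative sketch of ideal (constant-field) scaling, as described above.
# The example scale factor s = 1.4 is an assumption made purely for illustration.

def ideal_scaling(s):
    """Return the factors by which key quantities change when dimensions shrink by 1/s."""
    return {
        "FET area": s ** -2,             # area scales as s^-2
        "max operating frequency": s,    # roughly proportional to s
        "parasitic capacitance": s ** -1,
        "wire resistance": s,
        "RC wire delay": s * s ** -1,    # = 1, independent of scale
        "devices per unit chip area": s ** 2,
    }

if __name__ == "__main__":
    s = 1.4  # dimensions shrink to ~70%, i.e. roughly one technology generation
    for quantity, factor in ideal_scaling(s).items():
        print(f"{quantity:28s} changes by x{factor:.2f}")
```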
Clearly, there are disadvantages to scaling. Firstly, not all things scale readily, or may already be reaching their technological limits; secondly, the cost of scaling the technology and validating it for mass production is almost prohibitive; and finally, the cost of manufacturing an IC also escalates, which can alter the overall economics of producing a device.

Intrinsic Limits

However, as s increases, there are intrinsic limits below which it is difficult to go, or at which the behaviour of the devices changes. It is at these limits that it is feared the exponential improvement of VLSI technology as we know it will end. When the size of the technology gets below 10 nm, the behaviour of the FET changes. Consider a FET with a channel length and width of 10 nm.

Gate Oxide

To be relatively consistent with the existing technology scaling rules (so-called 'constant field scaling'), the layer of Silicon Dioxide that separates the polysilicon gate from the silicon channel will be 1 nm thick – that is, about 3 atoms thick! This is the first problem (although researchers are looking at materials with much higher values of εr to allow them to make the insulating layer thicker whilst keeping the capacitance high). A very simple (parallel-plate) model for the capacitance of the gate-oxide-silicon capacitor gives:

C = ε₀·εr·L·W/t = 8.854×10⁻¹² × 3.9 × 10⁻⁸ × 10⁻⁸ / 10⁻⁹ ≈ 3.5 aF

The field strength at which SiO2 breaks down is 500 MV/m, so the maximum voltage that could be imposed on the gate would be 0.5 V. Consequently, the charge on the gate of this FET would be Q = C·V ≈ 1.7×10⁻¹⁸ coulombs – equivalent to only about 11 electrons! When a barrier is less than a few nm thick, the probability of electron tunnelling becomes significant and this could mean that neither the cut-off channel nor the gate oxide represents any sort of impediment to conduction. 1 nm is felt to be the limit for the SiO2 gate insulator before this conduction reaches unmanageable levels (100 A·cm⁻²).

There is ongoing research to identify alternative materials with higher relative permittivity, allowing thicker insulation to be used – indeed, the change to different gate materials began at 45 nm. However, as permittivity increases, the barrier that the material presents to tunnelling of carriers (mainly electrons) tends to decrease, so the choice of material is a trade-off. Additionally, the formation of an SiO2 insulator is done by thermal oxidation of the Si. This is an extremely reliable process and results in very few surface states (traps at the boundary between the Si and the SiO2 that tend to reduce the mobility of carriers in the channel). There are few materials that can be grown so easily and that do not affect carrier mobility. Indeed, most of the materials under consideration react considerably with Si and have a significant effect on performance. Consequently, SiO2 is still used as a buffer layer between the Si and the new insulator, and this tends to offset some of the benefit of that insulator.
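The gate-oxide figures quoted above are easy to check numerically. The sketch below repeats the arithmetic using only the constants given in the text (a parallel-plate capacitance model and a 500 MV/m breakdown field); it is an illustration, not a device model.

```python
# Rough check of the 10 nm FET gate-oxide numbers quoted above (illustrative only).

EPS_0 = 8.854e-12        # permittivity of free space, F/m
EPS_R_SIO2 = 3.9         # relative permittivity of SiO2
Q_ELECTRON = 1.602e-19   # electron charge, C

L = W = 10e-9            # channel length and width, m
t_ox = 1e-9              # gate oxide thickness, m
E_breakdown = 500e6      # SiO2 breakdown field, V/m

C_gate = EPS_0 * EPS_R_SIO2 * L * W / t_ox   # parallel-plate approximation
V_max = E_breakdown * t_ox                   # maximum voltage across the oxide
Q_gate = C_gate * V_max
n_electrons = Q_gate / Q_ELECTRON

print(f"Gate capacitance  : {C_gate * 1e18:.2f} aF")   # ~3.5 aF
print(f"Maximum voltage   : {V_max:.2f} V")            # 0.5 V
print(f"Gate charge       : {Q_gate:.2e} C")           # ~1.7e-18 C
print(f"Electrons on gate : {n_electrons:.1f}")        # only ~11 electrons
```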
Gate Electrode

Additional problems arising from the use of alternative gate insulators relate to using these insulators with the polysilicon gate electrode: the same problems that occur with the underlying crystalline Si also apply to the polysilicon. Polysilicon is used for the electrode because it can be n- or p-doped (to control the threshold voltages of pMOSFETs and nMOSFETs), and it is heavily doped to reduce its resistance. Using alternative insulators requires a move to metal electrodes and introduces the need for separate metals for pMOSFETs and nMOSFETs. This significantly complicates the fabrication process.

Lithography

The ability to define features reliably depends upon: electromagnetic radiation with a wavelength similar to or smaller than the minimum feature (to limit the effects of diffraction); some sort of mask with the necessary resolution; resist materials that have the appropriate chemistry and can be imaged at this resolution; the ability to combine processing steps with the necessary repeatability; and a manufacturing process that is tolerant of the sequence of steps applied. Currently, lithographic exposure tools (steppers) use light with a wavelength of 193 nm (from ArF excimer lasers producing power densities of circa 200 W/cm²), which undergoes a de-magnification of circa 4. The critical dimension at which a stepper can image features is:

CD = k·λ/NA

where k is a process-related parameter, λ is the wavelength of the light, and NA is the numerical aperture (n·sinθ, where n is the refractive index of the medium through which the light passes and θ is the half-angle of the cone of light passing from the final lens to its focus). Clearly, improving the resolution depends on changing these parameters.

There have been moves to reduce λ, but attempts have been hampered by the difficulty of finding light sources of shorter wavelength with sufficient power. Attempts to reduce λ to 157 nm using Fluorine-based lasers were aborted and efforts are now directed towards EUV sources at 13.5 nm (success here would circumvent the problem almost completely). However, the powers available at these wavelengths are still far too low to produce commercial steppers (a factor of 10 away) and the cost of the EUV steppers already delivered (for research purposes) is immense – over $100M per machine. The light, which is essentially at the edge of the X-ray band, is produced by evaporating droplets of molten Tin. Unfortunately, all of the optics thereafter need to be reflective, everything must be done in a vacuum, and issues relating to condensation of Tin on these optics abound.

Other approaches depend on increasing NA. Unfortunately, the NA of the optics in a typical stepper is circa 0.94 – very close to its limit in air. However, it was recognised that by introducing a layer of water between the final lens and the wafer, the NA can be increased (n for water is 1.44 at 193 nm). The value of k depends on the de-magnification and other issues such as diffraction, and it can be controlled by exploiting the coherence of the light source and essentially creating 'support features' that act as holographic lenses, offsetting the effect of the diffraction. An example is shown in Figure 2.

Figure 2: Serifs on Masks (feature on the mask vs. feature produced on the IC)

The features created on the mask, which are required to be re-created faithfully when exposed onto the surface of the wafer in the photoresist, are distorted by the effects of diffraction when the size of the features shrinks towards (and beyond) the wavelength. However, by using smaller support features, the effect of the distortion can be offset, producing features on the wafer that are closer to those desired.
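Returning to the resolution formula CD = k·λ/NA, the sketch below plugs in the numbers quoted above. Only λ = 193 nm, NA ≈ 0.94 in air and n = 1.44 for water come from the text; the process factor k = 0.3 and the EUV tool NA of 0.33 are assumptions made purely for illustration.

```python
# Critical dimension CD = k * lambda / NA, as given in the text.
# Only lambda = 193 nm, NA ~ 0.94 in air and n = 1.44 for water come from the text;
# k = 0.3 and the EUV NA of 0.33 are illustrative assumptions.

def critical_dimension(k, wavelength_nm, numerical_aperture):
    """Smallest printable feature for a given process factor k, wavelength and NA."""
    return k * wavelength_nm / numerical_aperture

WAVELENGTH = 193.0           # nm, ArF excimer laser
NA_DRY = 0.94                # close to the limit in air
NA_IMMERSION = 0.94 * 1.44   # water between lens and wafer raises the effective NA

k = 0.3  # assumed process factor; OPC and illumination tricks push k downwards

print(f"Dry 193 nm stepper       : CD ~ {critical_dimension(k, WAVELENGTH, NA_DRY):.0f} nm")
print(f"Immersion 193 nm stepper : CD ~ {critical_dimension(k, WAVELENGTH, NA_IMMERSION):.0f} nm")
print(f"EUV at 13.5 nm, NA 0.33  : CD ~ {critical_dimension(k, 13.5, 0.33):.1f} nm")
```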
Similarly, looking at the formation of lines (i.e. tracks), smaller fringes are placed around the lines, as shown in Figure 3 – these act as Fresnel-like lenses that focus the coherent light to produce more tightly focussed lines.

Figure 3: Optical Proximity Correction (OPC)

Beyond this, other techniques such as double-patterning are used, where the wafer is exposed using two masks, each of which carries half the features at a greater separation. However, this increases costs – two masks are used rather than one – and creates even more stringent accuracy requirements on the masks. Consequently, double-patterning is only used on critical layers. It is worth bearing in mind that masks are extremely expensive, because the small critical features and serifs require the use of Electron Beam exposure tools to write the masks, and a set of masks for an advanced process can cost in excess of $2M. Adding more masks is not desirable. Furthermore, a stepper (which can cost in excess of $50M) has a throughput of circa 100 wafers per hour and double-patterning will impact on the cost of producing ICs: increasing the number of exposures required for each layer reduces the number of wafers that can be processed per hour, making them more expensive.

Doping

The Si must be doped with various group III (p-type) and group V (n-type) elements to form the regions that make up the devices. The behaviour of any device is critically dependent upon this being done accurately – that is, upon the overall concentration, the distribution of the dopants, and the doping gradients. Doping is often done by implantation, whereby the Si is bombarded by high-energy dopant ions. These ions literally smash through the Si crystal lattice and become embedded at a depth that is related to their energy, mass, etc. Unfortunately, they do significant damage to the Si crystalline structure and this requires thermal annealing to repair; but the annealing process also allows the dopants to diffuse through the Si, upsetting the doping profile.

The density of atoms in crystalline Si is about 5×10²⁸ atoms/m³, so the total number of atoms in the 10 nm cubic volume (10⁻²⁴ m³) formed by the area of the channel to a depth of 10 nm would be around 50,000. The maximum level to which Si can be doped with impurities (the solubility limit) before the chemistry becomes more complex is ~10²⁶/m³; this would mean only around 100 dopant atoms in the channel. The distribution of these dopants is critical in controlling, for example, the threshold voltage of the FET.

Each of these issues is a serious problem in its own right, and the likelihood is that making FETs that work, and can be manufactured reliably, with a minimum dimension below about 10 nm will be extremely difficult, if not impossible. It is worth bearing in mind that current technology is circa 14-22 nm (2013) and each technological advance gets harder and harder.
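The atom-counting argument above reduces to two multiplications; the sketch below simply repeats it with the densities quoted in the text (illustrative only).

```python
# Counting atoms in a 10 nm cube of channel, as in the doping discussion above.
# Densities as quoted in the text; purely illustrative.

SI_ATOM_DENSITY = 5e28        # atoms per m^3 in crystalline silicon
DOPANT_SOLUBILITY = 1e26      # approximate solid-solubility limit, atoms per m^3

side = 10e-9                  # 10 nm channel length, width and depth
volume = side ** 3            # 1e-24 m^3

si_atoms = SI_ATOM_DENSITY * volume
dopant_atoms = DOPANT_SOLUBILITY * volume

print(f"Si atoms in the channel volume       : {si_atoms:,.0f}")      # ~50,000
print(f"Dopant atoms at the solubility limit : {dopant_atoms:,.0f}")  # only ~100
```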
A Historical Perspective

The evolution of VLSI really began with the discovery of the bipolar transistor in 1947 by Bardeen, Brattain, and Shockley at Bell Labs in the US (although the thermionic triode, to which the Field Effect Transistor is often compared, was invented at the turn of the 20th century). Following the observation of the transistor effect, the initial development of transistors was relatively slow, relying on the development of the manufacturing processes from scratch, as shown in Table 1:

Date   Development
1947   Bipolar Transistor Effect
1950   Single Crystal Germanium
1951   Junction FET
1952   Single Crystal Silicon
1954   Commercial Silicon Transistor
1958   First IC (phase-shift oscillator) at Texas Instruments (Jack Kilby)
1960   Metal-Oxide-Semiconductor (MOS) FET
1961   Commercial IC (Fairchild – Resistor Transistor Logic)
1962   MOS IC
1963   Complementary MOS (CMOS)
1964   First Linear IC
1968   First MOS memory chip
1971   First Microprocessor (Intel 4004)

Table 1: Early Development of the Transistor and ICs

It is clear that the development from the first observation of the effect to the real beginning of the digital age took about 24 years. The first commercial microprocessor, the Intel 4004, shown in Figure 4, had the following characteristics:

Transistors   2,300
Technology    10 µm, NMOS
Clock speed   740 kHz

Figure 4: Intel 4004

The Development of Microprocessors

The development thereafter has been equally rapid. The characteristics of various Intel microprocessors over the years from 1971 to 2002 [4,5] can be seen in Figure 5 and Figure 6. The trend line for the specific processors shown is a doubling of the number of transistors every two years and a doubling of clock frequencies roughly every three years. In terms of scaling, a doubling of transistors (for a similar-sized device) implies s ≈ 1.2/year: if area scales as s⁻² per year then it scales as s⁻⁴ over two years, and if twice the number of transistors fit into the same area every two years then s⁻⁴ = 0.5 and so s ≈ 1.2/year. On the same basis, if speed scales with s then over three years the speed should scale as s³; for the clock frequency to double over three years would require s ≈ 1.26 (not exactly the same figure, but not too far out either). However, it is worth noting that this reflects not what was possible, merely what was marketed by Intel.

Figure 5: Intel Processor Circuit Densities

Figure 6: Intel Microprocessor Clock Frequencies

Additionally, it is worth remembering that the ability and desire of a manufacturer to deliver a particular IC at any point in time is a combination of issues:

- Technology, and the effects of migrating to better technologies
- Design cost
- Profitability
- Marketing and competition
- Risk

All of these factors tend to obscure the progression of the underlying technology somewhat.
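The annual scale factors quoted above follow directly from the doubling periods; the short sketch below is just that arithmetic, assuming nothing beyond the doubling periods given in the text.

```python
# Annual scale factors implied by the doubling periods discussed above.
# Pure arithmetic; no data beyond the doubling periods quoted in the text.

def annual_scale_from_transistor_doubling(years_to_double):
    # Transistor count per unit area grows as s^2 per year, so s^(2*years) = 2.
    return 2 ** (1.0 / (2 * years_to_double))

def annual_scale_from_clock_doubling(years_to_double):
    # Clock frequency is assumed to grow as s per year, so s^years = 2.
    return 2 ** (1.0 / years_to_double)

s_density = annual_scale_from_transistor_doubling(2)   # ~1.19 per year
s_clock = annual_scale_from_clock_doubling(3)          # ~1.26 per year

print(f"s implied by transistor doubling every 2 years : {s_density:.2f}/year")
print(f"s implied by clock doubling every 3 years      : {s_clock:.2f}/year")
```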
Memory

Along with microprocessors, memory forms the backbone of the computing industry. However, whilst the progression of microprocessors is predicated on performance, the technological progression of memory is predicated on storage density. That is to say, memories are designed such that density is increased at the expense of speed. This is not taken to extremes because the performance of a computer system is determined both by the raw processing power of the microprocessor and by the speed at which data and instructions can be delivered from memory to the microprocessor; whilst microprocessor system developers have devised a number of ways of coping with slow memory (e.g. cache memory, synchronous memories, wide buses), there is a basic requirement that the gap between processor and memory speeds does not grow too large.

The development of memories can be traced in much the same way as microprocessors across the last 30 years (here we concentrate on one kind of memory technology – Dynamic RAM), as shown in Figure 7.

Figure 7: MOS Memory Sizes (Megabits per device, 1975-2005)

Figure 7 shows the number of bits that can be packed into a memory device, and the trend line shows a doubling of memory capacity every 18 months. The speed of available DRAMs is shown in Figure 8 and here, although the progression is still exponential, the speed of available memory only doubles every 8 years.

Figure 8: MOS Memory Speed (cycle time in ns, 1975-2005)

Clearly, you may question these figures: don't DRAMs run a lot faster than this? The true answer is no. Synchronous DRAMs wrap DRAM arrays in a synchronous wrapper to which commands such as OPEN PAGE, READ and CLOSE PAGE are issued. Once a page is opened, multiple values can be read (in bursts) at very high speed. However, the basic access time (in truth, the cycle time) is the time taken to open a page, read a value, and then close the page, ready to open a new page; this is what contributes to the large values in Figure 8. Additionally, SDRAMs contain multiple DRAM arrays that can be operated in parallel, so the latency associated with opening a page can be hidden whilst data from a page in another DRAM array is being accessed.
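The widening gap between memory density and memory cycle time implied by the two trend lines can be extrapolated in a few lines of Python. The 1990 starting values below are round-number assumptions chosen for illustration, not values read from Figures 7 and 8.

```python
# Illustrative extrapolation of the DRAM trends described above:
# capacity doubles every 18 months, cycle time halves only every 8 years.
# The 1990 starting values are round-number assumptions, not data from the figures.

def extrapolate(start_value, years_elapsed, doubling_period_years, increasing=True):
    doublings = years_elapsed / doubling_period_years
    factor = 2 ** doublings if increasing else 0.5 ** doublings
    return start_value * factor

capacity_1990_mbit = 4.0     # assumed 4 Mbit device in 1990
cycle_time_1990_ns = 100.0   # assumed 100 ns cycle time in 1990

for year in (1990, 1995, 2000, 2005):
    years = year - 1990
    capacity = extrapolate(capacity_1990_mbit, years, 1.5)                     # doubling every 18 months
    cycle = extrapolate(cycle_time_1990_ns, years, 8, increasing=False)        # halving every 8 years
    print(f"{year}: ~{capacity:7.0f} Mbit/device, cycle time ~{cycle:5.1f} ns")
```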
The Future

With such a steady and reliable progression of technology, crystal-ball gazing should be straightforward. However, technology shifts are not smooth and there are many factors that give rise to sudden changes or 'flat spots'. For example, the change from Aluminium to Copper interconnects can give rise to a sudden increase in operating frequencies (because Cu is roughly twice as conductive as Al and, hence, the resistance of interconnects will be lower; with the same parasitic capacitances this should give rise to smaller delays if interconnect delay is a limiting factor).

There is a 'roadmap' published on a biennial basis by an organisation made up of semiconductor manufacturers: the International Technology Roadmap for Semiconductors. The roadmap makes a range of forecasts for various aspects of semiconductor technology over the near and long term. Additionally, it tries to identify what the industry perceives the major challenges to be in the coming years. The following projections, in Figure 9 to Figure 12 (drawn from the 2001 ITRS), try to encapsulate the information that has already been presented. Note that DRAM densities are given in Gbits·cm⁻² to present the information in a more uniform way (the introduction of new generations of DRAM at a lower rate than the shrinking of the technology leads to a curve resembling a staircase). It is worth noting that, especially in the long term, these predictions assume that solutions to fundamental problems will be found.

Figure 9: Projected Shrink of Technology (feature size vs. year, 2000-2020)

Figure 10: Projected Number of Transistors in High-End Microprocessors (millions of transistors vs. year, 2000-2020)

Figure 11: Projected On-Chip Clock Frequencies (GHz vs. year, 2000-2020)

Figure 12: Projected Transistor Densities for DRAM (Gbits·cm⁻² vs. year, 2000-2020)

As this document is being updated (2013), we can look at the predictions from 2001 and see how accurate they are. Figure 9 details the shrinking of the technology and the prediction is that in 2013 the technology will be at 22 nm – pretty much where we are. Figure 10 indicates that a high-end processor should have circa 1.5B transistors: the Intel Core i7 processor from 2012 has 1.4B transistors. Figure 11 indicates that clock frequencies should be circa 20 GHz – this prediction is wrong: the industry found it too difficult to scale processor frequencies beyond about 4 GHz. Figure 12 indicates that a DRAM should pack 12 Gbits of memory into 1 cm²; in fact, a leading-edge device packs in circa 4.5 Gbits/cm².

Beyond these trends, there are a number of things that will change, or become increasingly important, in the coming years. New processes and materials will be introduced or become mainstream. For example, Cu has been introduced as a replacement for Al (despite real problems such as the migration of Cu into Si at the boundary) to reduce interconnect resistance. Additionally, different dielectrics with lower εr are being sought to reduce interconnect capacitance. Conversely, materials with high values of εr are being investigated to keep gate insulator thicknesses at manageable values (this change came in at circa 45 nm). The use of Silicon-on-Insulator technologies is becoming more mainstream because of their superior characteristics (in many respects) and reduced parasitics (although there is still a cost difference). Intel introduced FinFETs at 22 nm (to produce devices with characteristics that offset the problems associated with shrinking technologies) and is contemplating 14 nm technology.

Currently, the wafers used for the production of ICs are 200 mm or 300 mm in diameter (most manufacturers have moved to 300 mm wafers). Manufacturers are contemplating a shift to 450 mm wafers but this, of course, depends upon a wide range of technology problems being solved and the huge capital investment being made to gear up for the new wafer size. Like everything else, the cost of IC fabrication facilities has been rising exponentially: Rock's Law (attributed to Arthur Rock, an early backer of Intel) says that the cost of the semiconductor manufacturing equipment that dominates the cost of any Fab doubles every 4 years. Consequently, whereas in 1970 the cost of a Fab was $10M ($90M in today's money), the cost was around the $2-3B [2] mark in the early 2000s and is now as much as $10B! As long as the industry is fuelled by an exponential growth in revenues, the payback period for such investments will remain much the same. However, any levelling out of demand will severely affect the industry and prevent it from migrating to better technologies.

[2] $1B = $1000M
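Rock's Law, as quoted, lends itself to a quick sanity check: doubling a $10M 1970 Fab cost every four years lands within a factor of two of the figures mentioned above for the early 2000s and for 2013. The sketch below is that arithmetic only, not a claim about actual Fab prices.

```python
# Rough Rock's-Law projection of Fab cost: doubling every four years from $10M in 1970.
# Illustrative only - real Fab costs depend on far more than a single doubling rule.

BASE_YEAR = 1970
BASE_COST_USD = 10e6          # $10M in 1970, as quoted in the text
DOUBLING_PERIOD_YEARS = 4

def fab_cost(year):
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_COST_USD * 2 ** doublings

for year in (1970, 1985, 2001, 2013):
    print(f"{year}: projected Fab cost ~ ${fab_cost(year) / 1e9:6.2f}B")
```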
As technology has shrunk, major, mainstream IC manufacturers have gradually fallen by the wayside – unable or unwilling to contemplate the costs associated with staying on the roadmap. Consequently, there are few manufacturers left who can make ICs at the leading edge, e.g. Intel, Samsung, TSMC, GlobalFoundries and IBM.

References

1. Moore G E. Cramming More Components onto Integrated Circuits. Electronics, Vol. 38, No. 8, 1965, pp. 114-117.
2. Plummer J D, Griffin P B. Material and Process Limits in Silicon VLSI Technology. Proceedings of the IEEE (Special Issue on Limits of Semiconductor Technology), Vol. 89, No. 3, 2001.
3. Glaser A B, Subak-Sharpe G E. Integrated Circuit Engineering. Addison-Wesley, 1979. ISBN 0 201 7427 3.
4. Rabaey J M. Digital Integrated Circuits – A Design Perspective. Prentice Hall, 1996. ISBN 0 13 394271 6.
5. http://www.intel.com
6. Available from http://en.wikipedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2008.svg under CCA licence.
7. International Technology Roadmap for Semiconductors – http://public.itrs.net

Copyright © Luke Seed, September 2013
