Digital Imaging and Image Preprocessing Lectures PDF

Document Details

Uploaded by ThankfulChrysanthemum

LUT University, LAB University of Applied Sciences

Henri Petrow

Tags

digital imaging, image processing, electromagnetic radiation, light

Summary

These lecture notes cover the fundamentals of light interaction with matter and imaging sensors, focusing on electromagnetic radiation. The notes explain the wave nature of light, reflection, and different light sources, as part of a lecture series on digital imaging.

Full Transcript

3.9.2024 BM40A1201 DIGITAL IMAGING AND IMAGE PREPROCESSING
Light Interaction with Matter and Imaging Sensors
Lecture 1: Light
Henri Petrow

ELECTROMAGNETIC RADIATION

WHAT IS ELECTROMAGNETIC RADIATION?
Electromagnetic radiation (EMR) is composed of self-propagating electromagnetic waves; it does not need any medium for propagation. It travels in vacuum or space at the speed of light, 300,000 km/s. EMR can be analyzed and treated both by utilizing wave theory and particle properties, depending on the physics approach required. When treating EMR as waves, it is composed of self-propagating, perpendicular electric (E) and magnetic (B) field components. When treating EMR as particles, photons, we apply quantum physics to these quanta, which carry quantized EMR energy as they propagate.

SOUND WAVES ARE NOT ELECTROMAGNETIC
Sound waves are mechanical waves causing pressure and displacement changes in a medium, typically air or water. Completely different physics apply to sound waves: they need a medium, and they do not exist in vacuum or space.

SOURCES OF ELECTROMAGNETIC RADIATION
EMR is generated in nature by stars and by the decay of atomic nuclei, but today to a large part also by human activities. According to classical physics, electromagnetic radiation is generated by accelerating or decelerating charged particles, most typically electrons. The Sun produces a wide range of EMR. Light bulbs, fluorescent lamps, LEDs, telecommunication devices, medical devices and many everyday household appliances are other modern sources of EMR. (Sun photo by NASA. A cell phone: a radiation-source powerhouse.) This course concentrates on the visible region of light and on digital imaging utilizing this part of the EMR spectrum.

Some other everyday examples of EMR emitters: a medical computed tomography (CT) x-ray imaging machine, a halogen lamp, a TV remote control, a microwave oven, an infrared household heater, and a telecommunication base station.

TWO NATURES OF LIGHT: THE WAVE NATURE OF ELECTROMAGNETIC RADIATION

ELECTROMAGNETIC WAVE
EMR is treated as perpendicular, time-varying electric (E) and magnetic (B) fields. The simplest wave is a sinusoidal wave:
E_x = E_0 cos(ωt − kz − φ_0),
where k is the propagation constant, k = 2π/λ, λ is the wavelength, ω is the angular frequency and φ_0 is the phase of the wave at t = 0 and z = 0. An electromagnetic wave is a travelling wave with time-varying electric and magnetic fields; the fields are perpendicular to each other and to the direction of propagation, k.

PHASE VELOCITY
In the formula E_x = E_0 cos(ωt − kz − φ_0), the argument ωt − kz − φ_0 is called the phase of the wave, φ. The phase velocity v is the speed at which a point of constant phase φ (for example the maximum-amplitude point) travels along the z-axis:
v = ω/k = νλ.
In any optical medium with refractive index n, the phase velocity is
v = c/n,
where c is the speed of light in vacuum.
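These relations are easy to check numerically. The short Python sketch below, a minimal example rather than anything from the lecture, computes the frequency, phase velocity and in-medium wavelength of green light; the glass index n = 1.5 is an assumed illustrative value.

```python
# Minimal sketch: phase velocity and wavelength inside a medium,
# using v = c/n and v = nu * lambda from the notes above.
C = 299_792_458.0          # speed of light in vacuum (m/s)

def phase_velocity(n: float) -> float:
    """Phase velocity in a medium of refractive index n."""
    return C / n

def wavelength_in_medium(lambda_vac: float, n: float) -> float:
    """Frequency is unchanged at a boundary, so lambda scales as 1/n."""
    return lambda_vac / n

n_glass = 1.5              # assumed refractive index of glass
lam = 550e-9               # 550 nm green light in vacuum
nu = C / lam               # frequency (Hz), the same in every medium

print(f"frequency          : {nu:.3e} Hz")
print(f"phase velocity     : {phase_velocity(n_glass):.3e} m/s")
print(f"wavelength in glass: {wavelength_in_medium(lam, n_glass)*1e9:.0f} nm")
```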
WAVE PACKETS
If we now consider two waves at angular frequencies ω + δω and ω − δω which interfere, they generate a wave packet containing field oscillations at the mean frequency ω, but the packet's amplitude is modulated at the frequency δω. Two waves of slightly different wavelength travelling in the same direction result in a wave packet whose amplitude variation travels at the group velocity. The envelope wave and its maximum-amplitude point travel at a speed called the group velocity, v_g.

GROUP VELOCITY AND GROUP INDEX
The group velocity v_g in a medium is defined as
v_g = c/N_g,
where c is the speed of light and N_g is the group index of the medium. The group index is defined as
N_g = n − λ (dn/dλ),
which is a function of wavelength (n is also a function of wavelength).

PLANE WAVES
E_x = E_0 cos(ωt − kz − φ_0) describes a monochromatic plane wave which is constant in any plane perpendicular to the z-axis. This plane of constant E (for given z and t) is called a wavefront. A plane EM wave travelling along z has the same E_x (or B_y) at any point in a given xy-plane, so all electric field vectors in that plane are in phase. The xy-planes are of infinite extent in the x and y directions.

THE DIVERGENCE OF EMR
Very far away from the EMR source, EMR waves may be treated as nearly perfect plane waves (figure a). Real-life EMR sources are often closer, and beam divergence must be taken into account. Figure (b) presents an EMR point source, which produces a spherical wavefront. Many real-life EMR/light sources behave as in (c): the wavefronts are bent, and this must be accounted for.

REFLECTION OF LIGHT
EMR/light is treated according to wave theory when analyzing phenomena such as interference and diffraction. Snell's law relates the propagation angles and refractive indices of two media when light travels through the boundary between two optically different media:
sin θ_i / sin θ_t = n_2 / n_1,
where θ_i and θ_t are the incident and transmitted wave angles, and n_1 and n_2 are the refractive indices of medium 1 and medium 2. A light wave travelling in a medium with a greater refractive index (n_1 > n_2) suffers both reflection and refraction at the boundary.

TOTAL INTERNAL REFLECTION
If n_1 > n_2 and θ_i is large enough, total internal reflection (TIR) occurs when θ_t = 90°. The critical incidence angle θ_c for TIR is defined by
sin θ_c = n_2 / n_1.
When a light wave travelling in a denser medium strikes a less dense medium, the wave may be transmitted (refracted) or reflected depending on the incidence angle with respect to θ_c, which is determined by the ratio of the refractive indices: (a) θ_i < θ_c, (b) θ_i = θ_c, (c) θ_i > θ_c and total internal reflection (TIR).

If θ_i < θ_c, some of the wave is transmitted into the less dense medium and some of the wave is reflected. If θ_i > θ_c, the incident wave suffers total internal reflection; however, there is an evanescent wave at the surface of the medium.
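Snell's law and the critical angle can be illustrated in a few lines of Python; the glass-to-air indices below are assumed example values, and the function simply reports TIR when no transmitted angle exists.

```python
import math

def snell_transmitted_angle(theta_i_deg, n1, n2):
    """Transmitted angle from Snell's law; None above the critical angle."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if s > 1.0:
        return None            # total internal reflection, no transmitted ray
    return math.degrees(math.asin(s))

n1, n2 = 1.5, 1.0              # assumed: glass into air
theta_c = math.degrees(math.asin(n2 / n1))
print(f"critical angle: {theta_c:.1f} deg")   # about 41.8 deg

for theta_i in (30, 41.8, 60):
    t = snell_transmitted_angle(theta_i, n1, n2)
    print(theta_i, "->", "TIR" if t is None else f"{t:.1f} deg")
```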
REFLECTION OF LIGHT
In the case of total internal reflection, the reflected wave has the same amplitude as the incident wave, but its phase is shifted. If we divide the incident and reflected E-field waves into perpendicular and parallel components, the phase changes may be calculated from (with n = n_2/n_1):
tan(φ_⊥ / 2) = √(sin² θ_i − n²) / cos θ_i
for the perpendicular component, and
tan(φ_‖ / 2 + π/2) = √(sin² θ_i − n²) / (n² cos θ_i)
for the parallel component.

REFLECTION COEFFICIENTS AND REFLECTANCE
Fresnel's equations define the reflection and transmission coefficients for the amplitudes of the perpendicular and parallel electric field components. In the special case of normal incidence (θ_i = 0°), the reflection coefficients of the two components are equal, and they can be calculated from
r_‖ = r_⊥ = (n_1 − n_2) / (n_1 + n_2).
The reflectances R_‖ and R_⊥ are defined as
R_‖ = |r_‖|² and R_⊥ = |r_⊥|².

TWO NATURES OF LIGHT: THE PARTICLE NATURE OF ELECTROMAGNETIC RADIATION

PHOTONS
In the particle treatment, EMR/light energy is carried by EMR quanta called photons. The photon concept was initially developed by Albert Einstein to explain observations that did not fit the classical wave model. The energy of a photon depends only on its frequency (ν, often also written f) and hence, inversely, on its wavelength (λ); c is the speed of light and h is Planck's constant:
E = hν = hc/λ.
Note: the frequency symbol ν is the Greek letter "nu", easily confused with the Latin letter "v" used for velocity. (Atomic photon emission as explained by Niels Bohr in 1913.)
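A quick numeric illustration of the photon-energy relation E = hc/λ, converting a few visible wavelengths to electron-volts (constants rounded):

```python
# E = h*nu = h*c/lambda, evaluated for visible-light photons.
H = 6.626e-34        # Planck's constant (J*s)
C = 3.0e8            # speed of light (m/s)
EV = 1.602e-19       # joules per electron-volt

def photon_energy_eV(lambda_m: float) -> float:
    return H * C / lambda_m / EV

for lam_nm in (400, 550, 700):
    print(f"{lam_nm} nm -> {photon_energy_eV(lam_nm * 1e-9):.2f} eV")
# 400 nm ~ 3.1 eV, 700 nm ~ 1.8 eV: shorter wavelength, higher energy.
```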
PHOTON INTERACTION WITH MATTER
The energy of a photon depends only on its frequency (ν) and, inversely, on its wavelength (λ): E = hν = hc/λ. The particle treatment of light is especially practical in explaining the interaction of light photons with semiconductors, such as silicon photodiodes or modern imaging sensors. Semiconductor sensor operation is conveniently modeled by photon penetration and absorption characteristics in semiconductors. (Interaction of photons in a silicon photodiode; details to be presented later.) This is a topic we will focus on in the next lecture.

INTERACTION OF VISIBLE PHOTONS WITH MATTER
For the purposes of this course, we concentrate on the interaction of light waves and light photons with materials typically used in digital imaging systems and sensors. Pure metals are typically highly conductive and reflective materials and appear opaque to visible light even in thin layers. Metal crystals contain vast concentrations of free electrons; as a result, all impinging light photons are absorbed while new, reflected photons are emitted with only a small loss of light energy and intensity. A detailed quantum mechanical explanation is beyond our focus here. (A mirror built from a thin layer of silver reflects nearly all light, and a mirror image is formed.)

Some insulator materials, like glass, appear transparent to light since there are no free electrons and the energy band gap is too large for visible-light photons to generate electron-hole pairs and be absorbed. In these insulator materials, visible photons traverse nearly unaffected, and the material appears transparent. Commercially used semiconductors, like silicon, conduct some electric current and under certain conditions provide an excellent environment for the detection and measurement of visible-light photons. Nearly all modern imaging sensors are manufactured on silicon substrates. (A light-detecting silicon photodiode appears nearly black in color since it absorbs visible photons efficiently.)

THE SPECTRUM: THE SPECTRUM OF ELECTROMAGNETIC RADIATION

REGIONS OF THE ELECTROMAGNETIC SPECTRUM
EMR is divided by wavelength range into radio, microwave, infrared, visible, ultraviolet, X-ray, and gamma-ray radiation. This course concentrates on the VISIBLE region of light and on the imaging utilizing this part of the EMR spectrum. Low frequency means longer wavelength; high frequency means shorter wavelength. EMR penetration and range in a medium depend on the EMR wavelength and on the absorption mechanisms in the medium: photons of different energies interact differently with materials.

SPECTRUM OF VISIBLE LIGHT
The visible spectrum covers just a tiny part of the whole EMR range. Wavelengths from roughly 400 nm to 700 nm are visible to the human eye; all other EMR ranges are invisible but may sometimes be sensed through their heating effect (UV, IR, microwaves). Visible wavelengths are sensed as different colors by the human eye: 400 nm light is seen as blue and 700 nm as red. An object appears, e.g., green to the human eye because it reflects green wavelengths and absorbs all other wavelengths.

The visible spectrum is divided into violet, blue, green, yellow, orange and red regions, as sensed by the human eye. At wavelengths longer than 750 nm, photons do not have enough energy to trigger the sensation of vision in the human retina, whereas UV photons below about 380 nm are mostly absorbed even before they reach the retina. In addition, UV photons that do reach the retina are harmful: eye protection is always required when working with sources of UV light.
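The division of the visible range into named bands can be expressed as a small lookup. The band edges in the sketch below are common approximate values chosen for illustration; only the 380-750 nm outer limits come from the notes.

```python
# Rough classifier for visible wavelengths; the internal band edges
# are approximate illustrative values, not exact standards.
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def visible_band(lambda_nm: float) -> str:
    for lo, hi, name in BANDS:
        if lo <= lambda_nm < hi:
            return name
    return "outside the visible range"

print(visible_band(550))   # green
print(visible_band(1000))  # outside the visible range (IR)
```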
ELECTROMAGNETIC RADIATION AND THE ATMOSPHERE
Earth's atmosphere efficiently blocks some ranges of EMR from entering or exiting the Earth's surface. Visible light and most radio waves can pass through the atmosphere in both directions.

SOLAR RADIATION SPECTRUM
The emission spectrum of sunlight at sea level on Earth is presented in the figure (red curve). Emission in the visible range is quite uniform: the Sun appears almost as a "white" source of light. A large proportion of solar radiation is in the IR region and warms the Earth's surface. The drops in the spectrum are caused by absorption in atmospheric molecules, mainly O2, O3 (ozone) and H2O.

TRADITIONAL LIGHT SOURCES
An incandescent lamp (old-fashioned filament lamp) emits a lot of IR radiation (heat) outside the visible region; in an incandescent lamp about 90% of the energy turns into heat. A halogen lamp is more efficient but still emits a lot of photons above 700 nm. A fluorescent lamp is clearly the most efficient of these three conventional lamp technologies.

LIGHT EMITTING DIODES
Light emitting diodes (LEDs) have inherently high efficiency, but their use in lighting and most imaging applications has been limited by their narrow emission bandwidth. Conventional LEDs emit only one color, and the emission of white light requires special technology. Currently the dominant method for general-lighting LEDs uses a blue indium-gallium-nitride (InGaN) LED with a phosphor coating to generate white light, a method somewhat similar to the one used in fluorescent lamps. The spectrum of a white LED shows the blue light directly emitted by the GaN-based LED (peak at about 465 nm) and the more broadband Stokes-shifted light emitted by the Ce3+:YAG phosphor, which emits at roughly 500-700 nm.

DEFINITIONS AND UNITS RELATING TO VISIBLE LIGHT

BANDWIDTH
Bandwidth means the width of the optical spectrum of the output of a light source, e.g., an LED light source or a filter attached to the light source. Bandwidth is also referred to as FWHM (full width at half maximum), meaning the width of the spectrum at the intensity level which is half of the central (often also the maximum) value. Note that bandwidth may equally be used to describe the sensitivity range of a light detector or an imaging sensor. In photonics, bandwidth is typically expressed in units of wavelength, typically nanometers; in electronics, bandwidth usually refers to the frequency domain and is expressed in Hz.

LUMINOUS INTENSITY, LUMINOUS FLUX AND ILLUMINANCE
Luminous intensity means the power emitted by a light source in a particular direction per unit solid angle, weighted by the luminosity function. The luminosity function is a standardized model describing the sensitivity of the human eye to different wavelengths. The unit of luminous intensity is the candela (cd); one candle emits roughly one candela of light power. Luminous flux means the total power of light emitted by a source in all directions, again weighted by the luminosity function. The unit of luminous flux is the lumen (lm): 1 lm = 1 cd × 1 sr. Illuminance is luminous flux per square meter; it typically describes how much light arrives on a surface. Its unit is the lux (lx): 1 lx = 1 lm/m².
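A minimal photometric sketch tying these units together, assuming an isotropic point source and light arriving perpendicular to the surface (the inverse-square law then gives the illuminance):

```python
import math

# Illuminance from a point source: E = I / d^2 (inverse-square law),
# valid for an isotropic emitter at normal incidence.
def illuminance_lux(intensity_cd: float, distance_m: float) -> float:
    return intensity_cd / distance_m**2

def luminous_flux_lm(intensity_cd: float) -> float:
    """Isotropic source: flux = intensity * full solid angle (4*pi sr)."""
    return intensity_cd * 4 * math.pi

print(illuminance_lux(1.0, 1.0))    # ~1 lx: one candle at one meter
print(luminous_flux_lm(1.0))        # ~12.6 lm for an isotropic 1 cd source
```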
RADIANT INTENSITY AND IRRADIANCE
Radiant intensity means the total power emitted by a light source in a particular direction per unit solid angle. Radiant intensity is independent of wavelength and of the human-eye response; its unit is W/sr. Irradiance means the total power of light arriving on a surface, again independent of wavelength and of the human-eye response; its unit is W/m². (Figure: annual accumulated irradiance of solar power.)
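For a monochromatic source, the radiometric and photometric quantities are linked by the luminosity function; at its 555 nm photopic peak the conversion factor is 683 lm/W. The sketch below applies this single-point conversion; the off-peak sensitivity value is an assumed example.

```python
# Radiant flux (W) -> luminous flux (lm) for monochromatic light.
# K_M = 683 lm/W at the 555 nm peak of the photopic luminosity function;
# V is the relative eye sensitivity at the chosen wavelength (assumed).
K_M = 683.0

def luminous_flux(radiant_watts: float, V: float) -> float:
    return K_M * V * radiant_watts

print(luminous_flux(1.0, 1.0))   # 1 W at 555 nm -> 683 lm
print(luminous_flux(1.0, 0.1))   # same power where the eye is 10x less sensitive
```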
SUMMARY: WHAT DID WE LEARN TODAY?
Electromagnetic radiation (EMR) is composed of self-propagating, perpendicular, time-varying electric (E) and magnetic (B) fields. The energy of a photon depends only on its frequency (ν) and, inversely, on its wavelength (λ): E = hν = hc/λ. The visible spectrum covers just a tiny part of the whole EMR range. Metals mostly reflect visible light, some insulators like glass are transparent to visible photons, and semiconducting silicon is the basic material of nearly all modern imaging sensors.

NEXT WEEK
Introduction to silicon as an imaging sensor substrate material; silicon p- and n-type doping; the silicon pn-diode and the photodiode (PD); key silicon photodiode characteristics.

10.9.2024 BM40A1201 DIGITAL IMAGING AND IMAGE PREPROCESSING
Light Interaction with Matter and Imaging Sensors
Lecture 2: PN-junctions and Photodiodes
Henri Petrow

RECAP
Metals mostly reflect visible light; some insulators like glass are transparent to visible photons. Commercially used semiconductors, like silicon, conduct some electric current and under certain conditions provide an excellent environment for the detection and measurement of visible-light photons. Nearly all modern imaging sensors are manufactured on silicon substrates. (A light-detecting silicon photodiode appears nearly black in color since it absorbs visible photons efficiently at all wavelengths.)

SEMICONDUCTORS: INTRODUCTION TO SILICON AND ITS PROPERTIES
Silicon is the chemical element with symbol Si and atomic number 14. The name comes from the Latin word "silex", meaning a hard stone. Silicon is widely found in dusts and sands, as silicon dioxide or as silicates; it is the second most abundant element in the Earth's crust after oxygen. Synthetic polymers called silicones (polysiloxanes) are based on silicon. Silicon is used in the steel and chemical industries in large volumes, and in its highly purified form in integrated circuits and sensors.

ENERGY BANDS
Atoms have discrete energy levels where electrons are allowed to stay. When atoms are brought close to each other in high numbers to produce a material, the energy levels start to overlap, and they form energy bands. Depending on the composition of the atoms, these bands can contain electrons or be empty. In a metal the various energy bands overlap to give a single band of energies that is only partially full of electrons; there are states with energies up to the vacuum level, where the electron is free. A metal is therefore a good electrical conductor.

In semiconductors, such as silicon, the electron energies divide into two distinct energy bands called the valence band (VB) and the conduction band (CB). The gap between the energy bands is called the bandgap, Eg, corresponding to forbidden electron energies in the crystal. Valence electrons (tied in covalent bonds) are in the valence band; free electrons are in the conduction band. At a temperature of 0 K there are no free electrons.

Electrons become free electrons if an external excitation energy source enables an electron to absorb enough additional energy to reach the conduction band; the minimum additional energy required is Eg. If a light photon with an energy
E = hν = hc/λ ≥ Eg
interacts with an electron in the VB, a free electron is generated. Thermal energy also creates electron-hole pairs at all temperatures above 0 K; this is called thermal generation. (A photon with an energy greater than Eg can excite an electron from the VB to the CB; when a photon breaks a Si-Si bond, a free electron and a hole in the Si-Si bond are created.)

P-TYPE DOPING
The properties of silicon can be modified using doping. Doping with boron (B) results in a p-type material, due to the lack of one valence electron for bonding. Boron has only three valence electrons; when it substitutes for a Si atom, one of its bonds has an electron missing and therefore a hole. Acceptor energy levels just above Ev around B⁻ sites accept electrons from the VB and thereby create holes in the VB.

N-TYPE DOPING
Four of the five valence electrons of arsenic (As) bond just like Si, but the fifth is left free, orbiting the As site; the energy required to release this fifth electron into the CB is very small. The material is n-type. In the energy band diagram of n-type Si there are donor energy levels just below Ec around As⁺ sites.

ENERGY BANDS OF DOPED SILICON
In intrinsic silicon there are as many holes in the valence band as there are electrons in the conduction band (zero at 0 K). An n-type material has more electrons; a p-type material has more holes. In all cases, np = ni². (Donor and acceptor energy levels are not shown in the diagrams.)

Due to doping, silicon is a reasonable conductor. When an n-type semiconductor is connected to a voltage supply, the whole energy band diagram tilts, because the electron now has an electrostatic potential energy as well.
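The condition E = hc/λ ≥ Eg translates directly into a cutoff wavelength. A minimal sketch, assuming the commonly quoted room-temperature silicon bandgap of about 1.12 eV (not a value from the notes):

```python
# Photons create electron-hole pairs only if E = h*c/lambda >= Eg.
H, C, EV = 6.626e-34, 3.0e8, 1.602e-19

def cutoff_wavelength_nm(Eg_eV: float) -> float:
    return H * C / (Eg_eV * EV) * 1e9

print(f"Si cutoff: {cutoff_wavelength_nm(1.12):.0f} nm")  # ~1107 nm, in the IR

def creates_pair(lambda_nm: float, Eg_eV: float = 1.12) -> bool:
    return H * C / (lambda_nm * 1e-9) >= Eg_eV * EV

print(creates_pair(700))    # True: visible red is absorbed by silicon
print(creates_pair(1500))   # False: telecom IR passes through silicon
```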
PN-JUNCTION: INTRODUCTION TO THE PN-JUNCTION AND ITS PROPERTIES

PN-DIODE
By manufacturing p- and n-type regions in the same silicon crystal we can build pn-junctions with a built-in electric field and a built-in potential difference. Diffusion and drift balance each other. With W_n and W_p the depletion-region edge coordinates on the n- and p-sides (W_p negative in this convention), the slide gives:
W_n e N_d = −W_p e N_a (depletion charge neutrality),
E_0 = W_n e N_d / ε = −W_p e N_a / ε (maximum built-in field),
V_0 = −E_0 (W_n − W_p) / 2 (built-in potential).

FORWARD BIASED PN-JUNCTION
Forward bias injects minority carriers. The applied voltage V lowers the potential energy barrier and reduces the width W of the space-charge layer (SCL). (Carrier concentration profiles across the device under forward bias; the hole potential energy with and without applied bias.)

REVERSE BIASED PN-JUNCTION
A reverse voltage widens the space-charge region, i.e., increases W. The reverse current is due to thermally generated electron-hole pairs (EHP). (Minority carrier profiles and the origin of the reverse current; hole potential energy across the junction under reverse bias.)

REVERSE I-V CHARACTERISTICS OF A PN-JUNCTION
A thicker space-charge region increases the reverse current.
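The built-in field and potential expressions above are easy to evaluate numerically; the doping concentrations and depletion widths in the sketch below are assumed illustrative values, not numbers from the lecture.

```python
# Built-in field and potential of an abrupt pn-junction, using the
# relations from the slide. Doping and widths are assumed examples.
E_CHARGE = 1.602e-19          # elementary charge (C)
EPS_SI = 11.9 * 8.854e-12     # permittivity of silicon (F/m)

Nd = Na = 1e22                # dopant concentrations (m^-3), assumed
Wn = Wp = 0.22e-6             # depletion widths on each side (m), assumed

# Charge neutrality requires Nd*Wn == Na*Wp (holds for symmetric doping).
E0 = E_CHARGE * Nd * Wn / EPS_SI          # peak field at the junction (V/m)
V0 = E0 * (Wn + Wp) / 2                   # built-in potential (V)

print(f"peak field      : {E0:.2e} V/m")
print(f"built-in voltage: {V0:.2f} V")    # ~0.7 V, typical for silicon
```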
SILICON PHOTODIODE: SILICON PHOTODIODE KEY CHARACTERISTICS

PN-DIODE LIGHT DETECTION PROPERTIES
Within the pn-junction we have a depletion layer where there are nearly no free charge carriers, and this is where the electric field is located. If a light photon with an energy above the bandgap interacts with an electron in the valence band of the silicon crystal, an electron-hole pair is generated in the depletion region. Electrons and holes move in different directions in the depletion region and generate an electric current: we have a device, a photodiode, which converts photon light energy into electric current. (Schematic of a reverse-biased pn-junction photodiode; net space charge across the diode in the depletion region, with donor and acceptor concentrations Nd and Na on the n- and p-sides; the field in the depletion region.)

SILICON PHOTODIODE BASIC PROPERTIES
A silicon photodiode is like a normal diode: it conducts current easily in only one direction. The p-type area forms the positive electrode, the anode, of the photodiode. Often the p-type area is manufactured onto an n-type silicon substrate; the substrate or bulk of the silicon photodiode chip is then n-type silicon and forms the negative electrode, the cathode, of the component.

Short λ means higher photon energy: photons are more likely to have energy > Eg and are therefore more likely to be absorbed. Long λ means lower photon energy: photons are less likely to have energy > Eg and penetrate deeper into the silicon crystal. The depletion layer in real photodiodes is usually very thin, in the range 5-30 μm, that is, 5-30 × 10⁻⁶ m.

A silicon PD has a good response when the photon energy is greater than Eg and the photons can penetrate to the depletion region of the PD. If E_photon is very high (short wavelength, in the UV region), the photons are absorbed before they reach the PD depletion area: no signal. If E_photon is very low (long wavelength, in the IR region), the photons have too little energy and pass through the depletion area without interaction: no signal.

SILICON PHOTODIODE AND MOS KEY CHARACTERISTICS
Silicon photodiodes (PDs) and light-sensitive CMOS cells are the basic building blocks of all digital imaging sensors. In the following we discuss the key characteristics of PDs, which also affect the overall performance of such imaging sensors. Although the electrical operating principle is different, many of the optical photodiode characteristics may equally be defined for a light-sensitive CMOS structure.

PHOTODIODE CURRENT
As mentioned above, PDs are like other diodes: in principle they conduct current in only one direction. In darkness, the current-voltage (I-V) curve of a PD is similar to that of a normal diode (curve 1). When the PD is exposed to light, the response curve shifts as current is generated by the light photons, even without an external voltage across the PD (curves 2 and 3).

If we assume the PD anode and cathode are connected and we monitor the short-circuit current Isc, we see a current which is extremely linear. Linearity over a very wide range of illumination is one of the excellent properties of silicon PDs: for a typical high-quality photodiode, the response is linear over nearly 5 orders of magnitude of illumination, sometimes even 9 orders of magnitude.

RESPONSIVITY
A typical spectral response curve of a PD is presented in the attached figure. The responsivity R defines how much signal current the PD generates at a given wavelength for one watt of light power. In an ideal photodiode, every light photon would generate one electron-hole pair; one watt of light power means one joule of energy per second, and one ampere of signal current means one coulomb of charge per second. Therefore, for an ideal photodiode,
R = e / (hν) = (e / (hc)) λ,
which is a straight line in the R-vs-λ graph.

QUANTUM EFFICIENCY
In real-life PDs, the efficiency of converting light quanta to electron-hole pairs is not 100%. The quantum efficiency (QE) η is defined as
η = (number of electron-hole pairs collected) / (number of incident photons).
Quantum efficiency may reach a value of 90-95% in a high-quality PD in the 700-900 nm wavelength range. Therefore, for a real-life photodiode,
R = η e / (hν) = η (e / (hc)) λ.
(Responsivity R vs. wavelength λ for an ideal photodiode with QE = 100% (η = 1) and for a typical commercial Si photodiode.)
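The responsivity formula becomes a one-liner in code; the sketch below evaluates R for an ideal and a realistic quantum efficiency at 900 nm.

```python
# Responsivity R = eta * e * lambda / (h * c), from the notes above.
E_CHARGE, H, C = 1.602e-19, 6.626e-34, 3.0e8

def responsivity(lambda_nm: float, eta: float = 1.0) -> float:
    """Responsivity in A/W at a given wavelength and quantum efficiency."""
    return eta * E_CHARGE * (lambda_nm * 1e-9) / (H * C)

print(f"ideal PD @ 900 nm : {responsivity(900):.2f} A/W")       # ~0.73 A/W
print(f"eta=0.9 @ 900 nm  : {responsivity(900, 0.9):.2f} A/W")  # ~0.65 A/W
```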
DARK CURRENT AND SHUNT RESISTANCE
The dark current I_D of a photodiode is the current flowing through the diode either at a very small voltage across the PD (some millivolts) or at a given reverse bias voltage. Typically, the dark current is characterized inversely as the shunt resistance R_sh, calculated from I_D at a 10 mV reverse bias voltage. The I_D of a small photodiode is typically in units of picoamps (10⁻¹² A) or femtoamps (10⁻¹⁵ A).

REVERSE DIODE CURRENT IN A GE PN-JUNCTION
The reverse current decreases when the temperature decreases: roughly a factor of 10⁻⁵ for a temperature decrease of 75 degrees. (Reverse diode current in a Ge pn-junction as a function of temperature in a ln(I_rev) vs. 1/T plot: above 238 K, I_rev is controlled by ni², and below 238 K it is controlled by ni.)

PHOTODIODE CAPACITANCE
The photodiode capacitance C_d is often dominated by the capacitance of the pn-junction, C_j, and we can define
C_d ≈ C_j = εA / d,
where ε is the permittivity of silicon, A is the area of the photodiode and d is the depth of the depletion layer. If A increases, the capacitance increases: a larger diode has a larger capacitance. If d increases, the capacitance decreases: a diode with a larger depletion region has a lower capacitance.

SILICON PHOTODIODE EQUIVALENT CIRCUIT
In a circuit-analysis situation, a PD can be conveniently described by an equivalent circuit with the following components: an ideal current source I_L describing the light-generated current; an ideal diode describing the diode properties; C_j representing the diode capacitance; R_sh representing the dark current; and a series resistance R_s corresponding to any series resistance of the pn structure or of the interconnections to the PD.

SILICON PHOTODIODE AND NOISE
The smallest light signal that a photodiode can detect is typically determined by the electronic noise of the PD. The noise of a PD grows both with 1/R_sh and with I_D; therefore a high R_sh and a low I_D minimize noise. In typical amplifier connections, the amplifier noise depends heavily on C_j: a lower C_j gives lower noise.
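The parallel-plate form C_j = εA/d can be evaluated for a typical geometry; the photodiode area below is an assumed example, while the 5-30 μm depletion-depth range is the one quoted earlier in the notes.

```python
# Junction capacitance Cj = eps * A / d. Area is an assumed example.
EPS_SI = 11.9 * 8.854e-12     # permittivity of silicon (F/m)

def junction_capacitance(area_m2: float, depth_m: float) -> float:
    return EPS_SI * area_m2 / depth_m

A = (1e-3) ** 2               # 1 mm x 1 mm photodiode, assumed
for d_um in (5, 30):          # depletion depth range quoted in the notes
    Cj = junction_capacitance(A, d_um * 1e-6)
    print(f"d = {d_um:2d} um -> Cj = {Cj*1e12:.0f} pF")
# Deeper depletion (e.g., higher reverse bias) lowers Cj and thus noise.
```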
SUMMARY: WHAT DID WE LEARN TODAY?
Silicon may be processed into high-quality crystals, and it can be doped to build p- and n-type silicon regions. P- and n-type regions in the same silicon crystal result in pn-junctions. Pn-junctions and MOS structures carry a built-in electric field and a built-in potential difference. Silicon photodiodes (PDs) and MOS structures are the basic building blocks of all digital imaging sensors: electrons and holes generated in the depletion region produce a photocurrent (photodiode) or stored charge (CMOS). In a circuit-analysis situation, a PD can be conveniently described by an equivalent circuit.

NEXT WEEK
Introduction to image sensors in general; the CCD, its operating principle and characteristics; CMOS, its operating principle and characteristics; CCD vs. CMOS.

17.9.2024 BM40A1201 DIGITAL IMAGING AND IMAGE PREPROCESSING
Light Interaction with Matter and Imaging Sensors
Lecture 3: CCD and CMOS image sensors
Henri Petrow

CONTENT
Introduction to image sensors in general; CCD, operating principle, characteristics; CMOS, operating principle, characteristics; CCD vs. CMOS.

INTRODUCTION: INTRODUCTION TO IMAGE SENSORS

WHAT IS AN IMAGE SENSOR?
An image sensor is an electronic component which converts the optical image presented by the lens into an electronic signal. Image sensors are used in all digital cameras, mobile phones, laptops, tablets, webcams, etc. Practically all image sensors are based on silicon microelectronics technology, and light conversion into an electric signal is based on a very large matrix of tiny photodiodes or MOS structures manufactured on the image sensor. Each square photodiode/CMOS element (here called a "pixel") is very small, typically smaller than 5 μm × 5 μm in present image sensor chips. (Example: the 3,008 × 2,000-pixel image sensor from a Nikon D40 digital camera, imaging window 23.7 mm × 15.6 mm.)
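From the sensor dimensions and pixel counts quoted above, one can estimate the pixel pitch; the sketch below does this for the Nikon D40 figures given in the notes.

```python
# Pixel pitch estimate from sensor size and resolution (Nikon D40
# figures from the notes: 23.7 mm x 15.6 mm, 3008 x 2000 pixels).
width_mm, height_mm = 23.7, 15.6
cols, rows = 3008, 2000

pitch_x_um = width_mm / cols * 1000
pitch_y_um = height_mm / rows * 1000
print(f"pixel pitch: {pitch_x_um:.1f} um x {pitch_y_um:.1f} um")
# ~7.9 um x 7.8 um: large by today's standards, where pitches
# below 5 um are typical (as noted above).
```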
HOW DOES A DIGITAL CAMERA WORK?
In principle, a digital camera is structurally similar to an old-fashioned film camera, except that in the same location where the film used to be placed, we now have an image sensor. Especially high-end digital cameras still have a mechanical shutter to define the exposure time and moment. An image sensor does not solve problems relating to bad optics: for a good digital image, you still need high-quality lenses and an adequate focal length. Digital cameras use two main image sensor types, CCD and CMOS sensors.

CCD AND CMOS CAMERA IMAGE PROCESSORS
Digital camera manufacturers have typically developed their own processor families; this applies at least to the largest consumer camera manufacturers (Canon, Nikon, Sony, Olympus, Panasonic, ...). The quality of the final digital image depends largely on three key parts of the digital camera: 1) the lens system, 2) the image sensor, and 3) the image processor. It is therefore not uncommon for camera manufacturers to advertise the combined quality of all these parts. It is important to note that the purpose of the image processor is NOT to exactly reproduce the colors as they appear through the lens: instead, the processor applies contrast and color-saturation boosting and other methods to make the photo look good to your eyes.

INTRODUCTION TO CCD AND CMOS IMAGE SENSORS
CCD stands for Charge-Coupled Device and refers to its operating principle. CMOS image sensors are based on CMOS (Complementary Metal-Oxide-Semiconductor) circuits. CCD and CMOS sensors perform the same light-conversion, signal-accumulation and signal-processing steps, but this happens at different locations and in a different sequence. In a CCD, the light-sensitive element is either a MOS structure or a photodiode; in a CMOS sensor it is a photodiode.

CCD: INTRODUCTION TO CCD IMAGE SENSORS

CCD IMAGE SENSORS
In a CCD sensor's MOS or photodiode structures, light is converted into electron-hole pairs, and electron charge is stored in each pixel element. Charge packages (here called "buckets") are moved from one pixel element to the first neighboring element, then to the neighbor's neighbor, and so on. Vertical and horizontal CCD "lines" are utilized to move the charge buckets out of the CCD pixel matrix, one at a time. At the output location, an amplifier converts the individual charges into voltages.

WATER BUCKET ANALOGY
A water-bucket analogy is often used to explain CCD operation. Raindrops collecting in buckets describe the integration of light-photon-induced charge in the pixels. Buckets on a parallel bucket conveyor (the VCCDs) move and spill their water onto a serial bucket conveyor. The serial bucket conveyor (the HCCD) spills its buckets one at a time into a measuring container (the amplifier). Then the next row of the parallel conveyor is spilled onto the serial conveyor, and so on.

CHARGE TRANSFER
Charge transfer in the VCCD and HCCD lines is based on a series of control gates, which are sequentially connected to different control voltages ("clocks") to move the electron charges. Electrons seek the deepest "potential well", meaning they move to the location where the gate voltage is highest. Step by step, gate by gate, the charges move forward along the CCD lines. A minimum of three clocks is required. (Simplified CCD cross-section, surface potentials, and clock voltages at t1-t4.)
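The bucket-brigade readout can be mimicked in a few lines. The toy model below, a deliberate simplification, shifts a row of charge packets one stage per clock toward the output node, ignoring transfer losses and noise.

```python
# Toy model of CCD serial readout: each clock cycle shifts every charge
# packet one stage toward the output amplifier.
def read_out(register):
    register = list(register)
    output = []
    for _ in range(len(register)):
        output.append(register[-1])        # packet at the output node is read
        register = [0] + register[:-1]     # all packets shift one stage right
    return output

charges = [10, 40, 25, 5]    # electrons integrated in four pixels of a row
print(read_out(charges))     # [5, 25, 40, 10]: pixel nearest the output first
```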
CCD IMAGE SENSORS
CCD sensors are manufactured in three main design variations: 1) full-frame, 2) frame-transfer and 3) interline CCDs. Full-frame is the earliest CCD design version, and its main advantage is a high fill factor and therefore high sensitivity. Fill factor refers to the percentage of the image sensor surface area that is active to incoming light; values near 100% are possible for full-frame CCDs.

FULL FRAME
In a full-frame CCD, each MOS pixel is at the same time the photosensitive element, the charge-storage element and a vertical CCD line element. When light is absorbed in a pixel, the resulting electron charge is stored until it is transferred by utilizing suitable voltages on the control electrodes built into all pixels. Charge is first transferred vertically (VCCD columns) and then horizontally (HCCD row) to an output amplifier. A full-frame CCD requires an external shutter; otherwise the image will be smeared, since image integration and transfer cannot be performed at the same time.

FRAME TRANSFER
In a frame-transfer CCD, the pixels are similar to full-frame CCD pixels, but the image sensor chip contains a light-protected storage area capable of storing the whole image collected by the actual image section. Utilizing suitable electronics and transfer gates, the image may be quickly moved to the storage section, and a mechanical shutter is not needed. The image section may continue collecting charge for a new image while the previous image is read out to the amplifier and camera electronics. Due to the two sections, frame-transfer CCDs may be twice as big as full-frame CCDs.

INTERLINE
An interline CCD incorporates separate light-sensitive charge-storage elements and charge-transfer elements. The light-sensitive elements are typically photodiodes. The charge-transfer channels are masked with interline masks (see figure) and are therefore not sensitive to light. Each pixel has its own masked storage element adjacent to it, and the accumulated charge can be rapidly shifted into it after image acquisition has been completed. An interline CCD does not require an external shutter.

Interline CCDs have an inherently low fill factor, since the light-masked VCCD and HCCD lines cover a considerable percentage of the image sensor surface. To improve the fill factor, interline CCDs have microlenses placed on the photodiodes to collect light more efficiently.

CCD IMAGE SENSORS, FUNCTIONAL DIAGRAM
(CCD functional diagram.)

CMOS: INTRODUCTION TO CMOS IMAGE SENSORS

CMOS IMAGE SENSORS
CMOS image sensors are based on CMOS (Complementary Metal-Oxide-Semiconductor) circuits, hence the name CMOS sensor. In a CMOS image sensor's photodiode structure, light is converted into electron-hole pairs. Contrary to a CCD, a CMOS image sensor has in each pixel an amplifier which converts the charge of the photodiode into a voltage. The term APS (active pixel sensor) is also used, to denote the fact that each pixel has a built-in, active amplifier.

Signal voltages of individual pixels are transferred further through column signal lines. The column signal wires (f) carry voltages from one pixel at a time, as controlled by the pixel-select switches (e); one row is selected at a time. This is different from a CCD. In addition to the pixel-select switches (e), column-select switches (g) and column circuits (h) are used to control the output of the amplified voltages. To output a video signal, all the switches on the CMOS chip act in a precise sequence.

PIXEL STRUCTURES
Many CMOS APS pixel-structure variations exist, the most common being the 3T and 4T designs; "xT" refers to the number of transistors. The 3T and 4T basic designs are similar, but the 4T has a buried np-diode and an additional transfer gate TX; two of the three or four transistors are used as switches.

MICROLENSES
Due to the electronics integrated into each pixel, CMOS APS sensors suffer from an inherently low fill factor, just like interline CCDs. Again, microlenses are utilized to improve light collection. Note the red color filter in the attached drawing; the handling of colors in CCD and CMOS sensors is explained shortly.

CMOS IMAGE SENSORS, FUNCTIONAL DIAGRAM
(CMOS functional diagram.)

PIXELS IN IMAGING: COLORS, SHUTTERS AND BINNING

HOW TO HANDLE COLORS?
CCD and CMOS sensors are inherently not sensitive to light wavelength (they are "monochrome"); they just measure the integrated charge created by light during a certain exposure time defined by a shutter mechanism. Other methods exist, but in most image sensors colors are handled by devoting individual pixels to the primary colors of the RGB color model: red, green and blue. Typically, a Bayer mosaic filter pattern is utilized.
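As an illustration, the sketch below builds a Bayer color-filter layout as a small index mask with numpy; the RGGB corner ordering is one common convention and is an assumption here.

```python
import numpy as np

# Build a Bayer color-filter mask: 0 = red, 1 = green, 2 = blue.
def bayer_mask(rows: int, cols: int) -> np.ndarray:
    mask = np.empty((rows, cols), dtype=np.uint8)
    mask[0::2, 0::2] = 0      # red on even rows, even columns
    mask[0::2, 1::2] = 1      # green shares each red row...
    mask[1::2, 0::2] = 1      # ...and each blue row
    mask[1::2, 1::2] = 2      # blue on odd rows, odd columns
    return mask

m = bayer_mask(4, 4)
print(m)
print("green fraction:", (m == 1).mean())   # 0.5: twice as many green pixels
```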
There are twice as many pixels for the green color because the human eye is more sensitive to green, and green color accuracy is therefore more important. (Bayer filter mosaic: individual pixels are devoted to measuring only one color, red, green or blue.)

HOW TO HANDLE COLORS? INTERPOLATION
Image sensors only record the red, green and blue color intensities in the image, and therefore all other colors must be generated from the RGB primary color signals. Using an interpolation process, the camera computes the actual color of each pixel by combining the color it captured directly through its own filter with the other two colors captured by the pixels around it; the interpolation process utilizes the signals of several neighboring pixels. This obviously requires a lot of computing power, and therefore all digital cameras are equipped with a dedicated image processor circuit.

CAMERA SHUTTER
The shutter is the device or function which defines how long, and in which way, the image sensor is exposed to the optical image presented by the lens. The shutter may be mechanical, as in conventional film cameras, but it may also be an electronic function built into the image sensor. The electronic shutter function in image sensors is often combined with resetting the image sensor or making it ready for recording the next image. In general, shutter functions are divided into global and rolling shutters. The shutter function especially affects both still digital image and digital video quality when the object moves fast.

A global shutter exposes all image sensor pixels simultaneously to the optical image presented by the lens: all pixels start integrating light at the same time and stop integration at the same time. A global shutter is often considered the most accurate choice for the representation of motion, and many of the rolling-shutter artifacts are avoided. A mechanical global shutter (e.g., in a full-frame CCD) requires expensive, moving mechanical parts which take up space; an electrical global shutter lowers the inherent fill factor (e.g., interline CCD).

In the case of a rolling-shutter function, the image sensor rows start and stop light integration at different moments in time. The row which starts first also stops first; therefore the integration time is equal for all rows. Due to the rolling-shutter principle, the image is prone to many different artifacts, especially smear due to a moving object. The rolling shutter relates to the native operating principle of a CMOS sensor, and special functions are required to manufacture CMOS sensors with a global shutter. This can be implemented, at the cost of a lower fill factor, by adding a memory cell to each CMOS pixel.
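The row timing of a rolling shutter can be written down directly: every row integrates for the same duration but starts one row-time later than the previous one. The timing numbers in the sketch below are made up for illustration.

```python
# Rolling-shutter schedule: row i integrates over
# [i * t_row, i * t_row + t_exp). Equal exposure, staggered start.
t_row = 0.05   # ms between row starts (assumed readout pace)
t_exp = 10.0   # ms exposure per row (assumed)

for row in (0, 1, 2, 999):
    start = row * t_row
    print(f"row {row:4d}: integrates {start:7.2f} .. {start + t_exp:7.2f} ms")
# The last row of a 1000-row sensor starts ~50 ms after the first:
# fast-moving objects are sampled at different times, causing skew.
```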
CAMERA BINNING
Binning means an image sensor/digital camera operating mode where signals from several pixels are combined in such a way that the camera operates as if it had larger pixels than it really has. In a CCD image sensor, this function can be realized on the sensor itself, by utilizing a modified clocking scheme which sums the charge collected by several adjacent pixels. In a 2 × 2 CCD binning mode (see drawing), signals from two rows are summed into the serial shift register; the serial shift register then moves signals from two of its consecutive cells to the output node marked "Summed Pixel". The signals of four neighboring pixels have thus been summed.

In a CMOS image sensor, due to its operating principle, binning cannot be implemented directly on the actual sensor area unless special circuitry is provided. As an example, such a CMOS binning circuit may include transistors enabling signal summing for four neighboring pixels. The key motivation for pixel binning is increasing the signal while not increasing the noise in equal proportion. Image quality is always a combination of contrast, defined by the signal-to-noise ratio, and resolution, defined by pixel size: a small pixel is not always best.
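Charge-domain binning happens on the CCD itself, but its arithmetic effect is easy to reproduce in software; the sketch below sums 2 × 2 blocks of a toy image with numpy.

```python
import numpy as np

# 2x2 binning in software: sum each 2x2 block into one "super-pixel".
# On a CCD this summation happens in the charge domain before readout.
def bin2x2(img: np.ndarray) -> np.ndarray:
    r, c = img.shape
    return img[:r//2*2, :c//2*2].reshape(r//2, 2, c//2, 2).sum(axis=(1, 3))

img = np.arange(16).reshape(4, 4)
print(bin2x2(img))
# Signal quadruples; uncorrelated noise grows only ~2x (sqrt of 4),
# so the signal-to-noise ratio improves at the cost of resolution.
```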
WHICH ONE IS BETTER: CCD VS CMOS

CCD VS. CMOS, WHICH IS BETTER?
It depends on what you need, and there is no simple answer, but on the high-volume consumer market the commercial game is clear: CMOS sensors already cover more than 97% of shipped image sensors. CCDs will continue to have a role in high-end and scientific applications where very low noise and a high fill factor are the keys to high-quality images. (CMOS image sensor revenue forecast in billions of dollars; worldwide image sensor market share forecast.)

WHY IS CMOS DOMINATING THE MARKET?
The simple answer is that most image sensors go to applications where cost and low power consumption are the key characteristics: mobile phones, laptops/tablets, and low-to-medium performance digital cameras. (CCD and CMOS key characteristics: comparison.)

TERMINOLOGY: TERMS AND DEFINITIONS RELATING TO IMAGE SENSORS
ROI = Region Of Interest: an image sensor or camera operating mode where only part of the image/pixels is utilized.
Frame = one complete image collected by the image sensor/camera.
Frame speed = readout speed = imaging frequency: how many frames are collected per second, often abbreviated fps (frames per second).
Pixel clock rate = readout frequency: the rate at which data from individual pixels comes out of the sensor/camera.
DSLR = Digital Single-Lens Reflex camera: a digital camera where light travels through the lens to a mirror that alternates to send the image either to the viewfinder or to the image sensor.
Dark current: the signal integrated by a pixel when the image sensor is in darkness; for CCDs it may be given in units of e-/pixel/s.
Full well capacity: the maximum charge a CCD storage cell can store without overflow, often in units of e-.
LVDS = Low-Voltage Differential Signaling: a differential, serial communications protocol. LVDS operates at low power and can run at very high speeds over inexpensive twisted-pair copper cables.
Image sensor format = the shape and size of the image sensor. Some formats are APS-C, APS-H and full frame (Nikon FX). Note that "full frame" in this context means an image sensor that is the same size as a 35 mm (36 × 24 mm) film frame.

SUMMARY: CCD AND CMOS IMAGE SENSOR SUMMARY
The attached drawing summarizes some of the key characteristics of CCD and CMOS image sensors. (CCD and CMOS image sensor comparison.)

CCD IMAGE SENSOR SUMMARY
Full-frame CCD: each MOS pixel is the photosensitive element, the charge-storage element and a vertical CCD line element; high fill factor; always requires an external, global shutter; especially suited for low light levels and scientific applications; the simplest CCD design.
Frame-transfer CCD: pixel the same as or very similar to full-frame; high fill factor; does not necessarily need an external shutter but is still subject to smear at high object speeds; large chip area and therefore clearly more expensive.
Interline CCD: separate light-sensitive charge-storage elements and charge-transfer elements; inherently low fill factor, microlenses required; built-in global shutter, the best CCD for fast-moving objects and consumer applications; the most complicated of the CCD designs.

CMOS IMAGE SENSOR SUMMARY
APS CMOS: the light-sensitive and charge-storing element is an np-photodiode; a charge-to-voltage amplifier is integrated into every pixel; the sensor may be addressed like a memory chip; typical pixel designs include an np-photodiode and 3 or 4 transistors; the basic designs are not very different, but the functions built into the sensor chip may vary; inherently has a rolling-shutter function.

NEXT WEEK
Start of the second part of the course: Illumination, Image Formation and Spectroscopy. Last exercise session on my part. Lectures and exercises for the second part will be held by Erik Vartiainen, in physical classrooms at LUT; check the rooms from TimeEdit.

Program for weeks 39-42 (lecturer: associate prof. Erik Vartiainen), Optics and photonics:
Lecture 4: Imaging optics: geometrical optics 1
Lecture 5: Imaging optics: geometrical optics 2; radiometry and photometry
Lecture 6: Laser imaging 1: fluorescence microscopy and nanoscopy
Lecture 7: Laser imaging 2: nonlinear optical imaging

LECTURE 4: IMAGING OPTICS: GEOMETRICAL OPTICS, PART 1
Ray optics; paraxial approximation; ray matrices; lens and lens-maker's formulas.

RAY OPTICS
Light rays are defined as directions in space, corresponding roughly to the k-vectors of light waves. Each optical system has an axis, and all light rays are assumed to propagate at small angles to it; this is called the paraxial approximation. Is geometrical optics the whole story? No: we neglect the phase. Geometrical optics would imply infinitely good spatial resolution; in the true picture, the smallest possible focal spot is about the wavelength λ, and the same holds for the best spatial resolution of an image. This is fundamentally due to the wave nature of light, which is not included in geometrical optics. Geometrical optics (ray optics) is the simplest version of optics.

THE OPTIC AXIS
A mirror deflects the optic axis into a new direction; a ring laser, for example, has an optic axis that traces out a rectangle. We define all rays relative to the relevant optic axis.

THE RAY VECTOR
A light ray can be defined by two coordinates: its position x and its slope θ, both measured relative to the optical axis. These parameters define a ray vector (x, θ), which changes with distance as the ray propagates through optics.

RAY MATRICES
For many optical components, we can define 2 × 2 ray matrices. An element's effect on a ray is found by multiplying its matrix with the ray vector. Ray matrices can describe both simple and complex systems: optical system ↔ 2 × 2 ray matrix.
The ray-matrix relation is
(x_out, θ_out)ᵀ = [A B; C D] (x_in, θ_in)ᵀ.
These matrices are often (uncreatively) called ABCD matrices.

HOW TO FIND A RAY MATRIX
To find the ray matrix of an optical component, we start by realizing that the output displacement x_out is a function of the input displacement and input angle, x_out = x_out(x_in, θ_in), and the same goes for the output angle, θ_out = θ_out(x_in, θ_in).

RAY MATRICES AS DERIVATIVES
Since the displacements and angles are assumed to be small, we can think in terms of partial derivatives, using the paraxial approximation dx_i ≈ x_i and dθ_i ≈ θ_i:
x_out = (∂x_out/∂x_in) x_in + (∂x_out/∂θ_in) θ_in ≡ A x_in + B θ_in,
θ_out = (∂θ_out/∂x_in) x_in + (∂θ_out/∂θ_in) θ_in ≡ C x_in + D θ_in.
In the matrix form, A = ∂x_out/∂x_in is the spatial magnification and D = ∂θ_out/∂θ_in is the angular magnification.

For cascaded elements, we simply multiply the ray matrices: if a ray passes through O1, then O2, then O3,
(x_out, θ_out)ᵀ = O3 O2 O1 (x_in, θ_in)ᵀ.
Notice that the order looks opposite to what it should be, but it makes sense when you think about it: the first element acts on the ray first.

RAY MATRIX FOR FREE SPACE OR A MEDIUM
If x_in and θ_in are the position and slope upon entering, and x_out and θ_out the position and slope after propagating from z = 0 to z, then x_out = x_in + z θ_in and θ_out = θ_in. In matrix notation:
O_space = [1 z; 0 1].

RAY MATRIX FOR AN INTERFACE
At the interface, clearly x_out = x_in. For the angle, Snell's law says n1 sin θ_in = n2 sin θ_out, which for small angles becomes n1 θ_in = n2 θ_out, so θ_out = (n1/n2) θ_in:
O_interface = [1 0; 0 n1/n2].

RAY MATRIX FOR A CURVED INTERFACE
At the interface, again x_out = x_in. To calculate θ_out, we need the angles θ1 and θ2 measured from the surface normal. If θ_s = x_in/R is the surface slope at the height x_in, then θ1 = θ_in + x_in/R and θ2 = θ_out + x_in/R. Snell's law, n1 θ1 = n2 θ2, gives
n1 (θ_in + x_in/R) = n2 (θ_out + x_in/R)
⇒ θ_out = (n1/n2) θ_in + (n1/n2 − 1) x_in/R, so
O_curved = [1 0; (n1/n2 − 1)/R, n1/n2].

Example 1. Determine the focal length for the concave surface, whose radius is 0.50 m, separating a medium with refractive index 1.20 from another with index 1.60.
Solution. The convex-surface result can be used here for the concave surface by changing n1 to n2 and vice versa in the corresponding ABCD matrix:
1/f = −C = −(n2/n1 − 1)/R = (n1 − n2)/(n1 R)
⇒ f = n1 R / (n1 − n2) = (1.20)(0.50 m)/(1.20 − 1.60) = −1.50 m.
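Example 1 can be verified with the ray matrices themselves; the sketch below reads the focal length off the C element of the curved-interface matrix (1/f = −C), with the indices interchanged as in the solution.

```python
import numpy as np

# ABCD check of Example 1, using the curved-interface matrix from above.
def curved_interface(n1: float, n2: float, R: float) -> np.ndarray:
    return np.array([[1.0, 0.0],
                     [(n1 / n2 - 1.0) / R, n1 / n2]])

# Concave surface: interchange the indices as in the solution,
# R = 0.50 m, media indices 1.20 and 1.60.
M = curved_interface(1.60, 1.20, 0.50)   # note the interchanged indices
f = -1.0 / M[1, 0]                       # 1/f = -C
print(f"f = {f:.2f} m")                  # -1.50 m, as in Example 1
```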
 1 0  n≠1 Ocurved =  n=1 n=1 interface ( n  1 2 / n − 1) / R n1 / n2  1 0  1 0  Othin lens = Ocurved Ocurved =   (1/ n − 1) / R 1/ n  interface 2 interface 1  ( n − 1) / R2 n  1   1 0   1 0 =  =   ( n − 1) / R2 + n (1/ n − 1) / R 1 n (1/ n )   ( n − 1) / R2 + (1 − n ) / R1 1   1 0  1 0 =  This can be written:  −1/ f  ( n − 1)(1/ R 2 − 1/ R1 ) 1   1  where: 1/ f = (n − 1)(1/ R1 − 1/ R2 ) The Lens-Maker’s Formula Ray matrix for a thin lens  1 0 Olens =   1/ f = (n − 1)(1/ R1 − 1/ R2 ) -1/f 1 The quantity, f, is the focal length of the lens. It’s the single most important parameter of a lens. It can be positive or negative. R1 > 0 R1 < 0 R2 < 0 f>0 R2 > 0 f 0, the lens deflects If f < 0, the lens deflects rays toward the axis. rays away from the axis. Ray matrix for a thick lens  1 0  O1 =    (1 n − 1) / R 1 1 n  1 d  O2 =    0 1   1 0 O3 =    ( n − 1) / R 2 n  A B OThick lens = O3O2O1 =    C D  → focal length: 1  1 1 (n − 1) d  = −C = (n − 1)  − +  f R  1 R2 nR R 1 2  Types of lenses Lens nomenclature Which type of lens to use (and how to orient it) depends on the aberrations and application. Example 2. Show that a plano-convex lens, where one of the radiuses in infinite, can always be considered as a thin lens: Solution: Lens-makers formula for lens with thickness d and R2 =  : Now 1 = 0, R2 so using the formula of a thich lens, we get: 1 1 1 (n − 1)d  1 = (n − 1)  − +  = (n − 1)   , f  R1 R2 nR1R2   R1  which is a lens-makers formula for a thin lens, where R2 = . Example 3. Negative meniscus: Find the ray matrix for a thick lens (in air) and its focal length, where R1 = 45cm, R2 = 30 cm, d = 5cm, n = 1.60. Solution  1 0 1 d  1 0  OLens = O3O2O1 =        (n − 1) / R2 n   0 1   (1 n − 1) / R1 1 n   1 0  1 5  1 0  =     0.6 30 1.6  0 1  (1 1.6 − 1) / 45 1 1.6   115 50   1 0   1 0   1 0      1 5     120 16  = 1 1  = 1  1.6 0 1  − 1 1.6   1 10        50   120 1.6   50  −   120 16   115 50   23 25    = 120 16   24  8   A B  =    23 16 1 16   7 17   C D   − +     1200 1200 16 16   1200 16  Focal length 1 1200 f =− =− cm  −171 cm  0 , i.e. the lens is so-called negative meniscus. C 7 Microlens Microlens
