## 2 Color-Imaging Systems

In Chapter 1 it was shown that, for the purposes of color measurement, scenes and other images can be characterized in terms of their color stimuli. In this chapter, the fundamental principles of how such color stimuli can be captured and reproduced by color-imaging systems will be discussed.

Color-imaging systems can be built using an almost unlimited variety of optical, chemical, and electronic components. But regardless of what technologies they incorporate, all imaging systems must perform three basic functions: image capture, signal processing, and image formation (Figure 2.1). These functions are the building blocks of all color-imaging systems, from the simplest to the most complex.

### Image capture

To form a reproduction, an imaging system first must detect light from each original color stimulus and, from that light, produce a detectable image signal. This function, called image capture, can be realized in a number of ways, depending on the technology of the particular imaging system. For example, an electronic camera, such as a digital still camera, might use a solid-state image sensor, such as a charge-coupled device (CCD), to detect light. Image capture occurs as photons of light are absorbed by the sensor, resulting in the generation of electrons. These electrons are collected into charge packets, and an image signal is produced by a sequential readout of those packets.

In a photographic film, light is captured in the form of a latent image. The latent image, composed of small clusters of metallic silver, is produced as photons of light strike the light-sensitive silver halide grains of the film. This chemical signal is detected and amplified during subsequent chemical processing. (Refer to Appendix C for more details on photographic media.)

Accurate color reproduction requires image capture that is, like the human eye, trichromatic.
As part of the capture process, then, the spectral content of original color stimuli must be separated to form three distinguishable color signals. This generally is accomplished by some form of trichromatic image capture. In some special applications, however, more than three color channels are captured, and trichromatic color signals are subsequently derived.

In an electronic camera, trichromatic capture can be accomplished using a solid-state sensor composed of a mosaic of light-sensitive elements. Individual sensor elements are overlaid with red, green, or blue filters (Figure 2.2a). Some high-end video and digital motion picture cameras use three sensors (for higher spatial resolution) and an appropriate arrangement of beam splitters and color filters (Figure 2.2b).

Trichromatic capture is accomplished in color photographic media by the use of overlaid light-sensitive layers. In the simplified film cross-section shown in Figure 2.3, the top layer records blue light, the middle layer records green light, and the bottom layer records red light. A color film actually may contain a total of 12 or more image-forming and other special-purpose layers, but its behavior is fundamentally the same as that of a simple three-layer film.

The color characteristics of trichromatic capture are determined by the spectral responsivities (relative responses to light as a function of wavelength) of the particular system. These responsivities typically will differ among systems. For example, Figure 2.4 compares the red, green, and blue spectral responsivities for a particular photographic film and a particular digital camera.
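The mosaic arrangement described above can be sketched in a few lines. This sketch assumes a Bayer-style RGGB repeating pattern; the actual filter layout of Figure 2.2a is not specified in the text, so the pattern and function name here are illustrative only.

```python
# Sketch: which color filter covers each photosite of a mosaic sensor.
# Assumes a hypothetical Bayer-style RGGB 2x2 repeating pattern.
def mosaic_filter(row, col):
    """Return the filter color ('R', 'G', or 'B') over a given photosite."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# The first two rows of the repeating pattern:
pattern = [[mosaic_filter(r, c) for c in range(4)] for r in range(2)]

# Each photosite records only one of the three color signals; the other
# two are later interpolated from neighboring photosites to yield full
# trichromatic signals at every pixel.
```

Because each element responds to only one spectral band, spatial interpolation is an inherent part of single-sensor trichromatic capture, which is why the three-sensor arrangement of Figure 2.2b offers higher spatial resolution.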
The individual color responses, called exposures, produced by a given set of spectral responsivities can be calculated using the following equations:

$R_{exp} = k_{cr} \int_{\lambda} S(\lambda)\, R(\lambda)\, r_c(\lambda)\, d\lambda$

$G_{exp} = k_{cg} \int_{\lambda} S(\lambda)\, R(\lambda)\, g_c(\lambda)\, d\lambda \qquad (2.1)$

$B_{exp} = k_{cb} \int_{\lambda} S(\lambda)\, R(\lambda)\, b_c(\lambda)\, d\lambda$

where $R_{exp}$, $G_{exp}$, and $B_{exp}$ are red, green, and blue exposure values; $S(\lambda)$ is the spectral power distribution of the light source; $R(\lambda)$ is the spectral reflectance (or transmittance) of the object; $r_c(\lambda)$, $g_c(\lambda)$, and $b_c(\lambda)$ are the red, green, and blue spectral responsivities of the image-capture device or medium; and $k_{cr}$, $k_{cg}$, and $k_{cb}$ are normalizing factors. These factors usually are determined such that $R_{exp}$, $G_{exp}$, and $B_{exp}$ = 1.0 when the object is a perfect white.

This white-point normalization is the equivalent of performing a white balance adjustment on an electronic camera, where the red, green, and blue image-capture signals are adjusted independently such that equal RGB reference voltages are produced when a reference white object is imaged or measured. With the application of the three normalization factors, the computed exposure values become relative values, properly referred to as exposure-factor values.

Equations (2.1) essentially are the same in form as those used for computing CIE XYZ tristimulus values (Chapter 1, Equations (1.2)). In fact, if the red, green, and blue spectral responsivities of the image-capture stage of an imaging system corresponded to the color-matching functions of the CIE Standard Colorimetric Observer, the resulting RGB exposure-factor values would be equivalent to CIE XYZ tristimulus values. In other words, the image-capture device or medium essentially would be a colorimeter.

This raises an interesting question. For accurate color reproduction, should the spectral responsivities of an imaging system always be designed to match those of a standard human observer? The answer is not as straightforward as it might seem. This question will be revisited in Part II, where a closer look will be taken at the second basic function performed by all imaging systems: signal processing.

### Signal processing

Signal processing modifies image signals produced from image capture to make them suitable for producing a viewable image. For example, an image-capture signal is electronically processed and amplified for transmission. A home television receiver performs further signal processing to produce signals appropriate for driving its particular type of display. In a photographic film, signal processing occurs as the film is chemically processed. (Chemical photographic processing sometimes is referred to as "developing." However, image development actually is just one of several steps in the overall chemical process, so the term is not strictly correct and will not be used here.)

Signal processing typically includes linear and nonlinear transformations operating on the individual color signals. A linear transformation of an individual color signal might effect a simple amplification, which usually is required to produce a signal sufficiently strong to generate a viewable image. Nonlinear transformations of the individual color signals primarily are used to control the grayscale of the image produced by the system. The grayscale is a measure of how a system reproduces a series of neutral colors, ranging from black to white. In Part II, it will be shown that the grayscale characteristic is one of the most important properties of any imaging system, and the reasons why the signal processing associated with the grayscale must be highly nonlinear will be discussed.
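In practice, the integrals of Equations (2.1) above are evaluated numerically from sampled spectral data, and the white-point normalization fixes each $k$ factor. The sketch below uses a rectangle-rule approximation; the spectra, sampling interval, and function names are hypothetical illustrations, not data from the text.

```python
# Sketch of Equations (2.1): exposure-factor values by numerical
# integration over wavelength. All spectral data below are hypothetical
# coarse samples, not measurements of any real source, object, or sensor.

def exposure(S, R, resp, k=1.0, dlam=10.0):
    """Approximate k * integral of S(l) * R(l) * resp(l) dl
    with a rectangle rule at a sampling interval of dlam (nm)."""
    return k * sum(s * r * f for s, r, f in zip(S, R, resp)) * dlam

S     = [0.8, 1.0, 1.0, 0.9]   # light-source power S(lambda)
white = [1.0, 1.0, 1.0, 1.0]   # perfect white: R(lambda) = 1 everywhere
obj   = [0.2, 0.5, 0.7, 0.4]   # some object's spectral reflectance
r_c   = [0.0, 0.1, 0.6, 0.9]   # red spectral responsivity r_c(lambda)

# White-point normalization: choose k_cr so a perfect white gives
# Rexp = 1.0, making the result a relative exposure-factor value.
k_cr = 1.0 / exposure(S, white, r_c)
Rexp = exposure(S, obj, r_c, k=k_cr)
```

The same computation with $g_c(\lambda)$ and $b_c(\lambda)$ yields $G_{exp}$ and $B_{exp}$; substituting the CIE color-matching functions for the responsivities would instead yield tristimulus values, which is the correspondence the text draws with Equations (1.2).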
Signal processing also is used to create linear and nonlinear interactions (crosstalk) among the individual color signals. For example, a modified red signal might be formed by taking portions of the green and blue signals and adding them to or subtracting them from the original red signal. In Part II, the reasons why such interactions are necessary will be discussed.

Signal processing also may include spatial operations such as image compression, sharpening, and noise reduction. While spatial operations may not directly affect color, they are a factor that must be considered in developing the appropriate form of digital color encoding for a given system.

### Image formation

The ultimate goal of a color-imaging system is, of course, to produce a viewable image. That is accomplished by the final stage of the system, image formation, where processed image signals are used to control the color-forming elements of the output medium or device. Although there are many types of color-imaging media, color-imaging devices, and color-image-forming technologies, virtually all practical image-formation methods fall into one of two basic categories: additive color or subtractive color.

In the formation of most additive color images, processed image signals are used to directly control the intensities of primary colored lights that make up the displayed image. Colors are produced by additive mixing (Figure 2.5). A three-beam digital cinema projector, for example, forms images by additively combining modulated intensities of red, green, and blue lights on a projection screen. This directly generates color stimuli, so no other light source is required for viewing. Color CRTs, LCD panels, plasma panels, and similar direct-view display devices form images by generating adjacent pixels of red, green, and blue light. The mixing of that red, green, and blue light subsequently takes place within the visual system of the observer.
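Additive mixing as just described can be sketched as a sample-wise sum of primary spectra, each scaled by its image signal. The primary spectra below are hypothetical four-sample blocks, not the primaries of any real projector or display.

```python
# Sketch: additive color formation. The stimulus reaching the observer
# is the sum of the three primary lights, each scaled by its processed
# image signal. Primary spectra here are hypothetical, not real data.
red_p   = [1.0, 0.1, 0.0, 0.0]
green_p = [0.0, 0.9, 0.3, 0.0]
blue_p  = [0.0, 0.0, 0.2, 1.0]

def additive_stimulus(r, g, b):
    """Mix the primaries additively: sum the scaled spectra sample-wise."""
    return [r * pr + g * pg + b * pb
            for pr, pg, pb in zip(red_p, green_p, blue_p)]

# All three primaries at full intensity produce the display's white:
white_stimulus = additive_stimulus(1.0, 1.0, 1.0)
```

Because the primaries are themselves lights, the displayed stimulus is generated directly and no separate viewing illuminant is involved, in contrast to the subtractive case described next.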
In the formation of subtractive color images, processed image signals control the amounts of three or more colorants (dyes, inks, or pigments) that selectively absorb (subtract) light of specific wavelength regions (Figure 2.6). Photographic media, for example, use cyan, magenta, and yellow (CMY) image-forming dyes to absorb red, green, and blue light, respectively. Many printing processes use CMY inks plus an additional black ink (the K of CMYK). An image formed by subtractive colorants is an object, which requires a light source for viewing. The spectral properties of the color stimuli produced from a subtractive image will change if the spectral power distribution of the viewing light source is changed.

A complete imaging system can be defined as any combination of devices and/or media capable of performing all three basic functions described in this chapter. For example, the combination of a digital still camera (image capture), a computer (signal processing), and a monitor (image formation) forms a complete system. Somewhat less obvious is that, by this definition, photographic media, such as color-slide and negative films, also are complete imaging systems. That fact will be extremely important to remember later when the problems of color encoding in hybrid systems, which combine photographic media, other types of hardcopy media, and various types of electronic devices, are addressed.

### Summary of key issues

- All imaging systems must perform three basic functions: image capture, signal processing, and image formation.
- An imaging system can be any combination of devices and/or media that is capable of performing these three functions.
- Color imaging requires image capture that is (at least) trichromatic.
- Signal processing modifies image signals, produced by image capture, to make them suitable for forming a viewable image.
- In the image-formation stage, processed image signals control the amounts of the color-forming elements produced by the output device or medium.
- Color-image formation can be accomplished either by additive or by subtractive color techniques.
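The subtractive case described in this chapter can be sketched with idealized "block" dyes, where each colorant absorbs exactly one third of the spectrum. The colorant behavior and values below are illustrative, not a model of real dyes or inks.

```python
# Sketch: subtractive color formation with idealized block colorants.
# Cyan, magenta, and yellow absorb red, green, and blue light from the
# viewing source, respectively. Amounts c, m, y range from 0 (none) to
# 1 (complete absorption); all values here are illustrative only.
def subtractive_stimulus(c, m, y, light=(1.0, 1.0, 1.0)):
    """Return the (red, green, blue) light remaining after the
    colorants subtract their complementary bands from the source."""
    r_light, g_light, b_light = light
    return (r_light * (1.0 - c),   # cyan subtracts red
            g_light * (1.0 - m),   # magenta subtracts green
            b_light * (1.0 - y))   # yellow subtracts blue

# With no colorant, the image simply returns the viewing source, so the
# resulting stimulus shifts whenever the illuminant changes -- unlike
# an additive display, which generates its stimuli directly:
equal_energy = subtractive_stimulus(0.0, 0.0, 0.0)
warm_source  = subtractive_stimulus(0.0, 0.0, 0.0, light=(1.0, 0.9, 0.7))
```

This dependence on the viewing illuminant is exactly why the text notes that a subtractive image is an object requiring a light source, and it foreshadows the viewing-condition issues addressed for hybrid systems later in the book.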