Digital Image Processing, 4th ed. PDF
Document Details
Gonzalez & Woods
Summary
This document is an excerpt from the book "Digital Image Processing, 4th ed." by Gonzalez & Woods. The chapter covers digital image fundamentals, including the structure of the human eye, image formation, brightness adaptation, light and the electromagnetic spectrum, image sensing and acquisition, sampling and quantization, the representation of digital images, and basic relationships between pixels. Author and publication details are included.
Full Transcript
Digital Image Processing, 4th ed. Gonzalez & Woods, www.ImageProcessingPlace.com
Chapter 2: Digital Image Fundamentals
© 1992–2008 R. C. Gonzalez & R. E. Woods. Edited by: Dr. Ahmad Alzou'bi

STRUCTURE OF THE HUMAN EYE

The innermost membrane of the eye is the retina. When the eye is properly focused, light from an object is imaged on the retina. Pattern vision is afforded by discrete light receptors distributed over the surface of the retina: cones and rods.

Cones are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color. Humans can resolve fine detail with cones largely because each cone is connected to its own nerve end. Cone vision is called photopic or bright-light vision.

Rods are distributed over the retinal surface. They capture an overall image of the field of view, are not involved in color vision, and are sensitive to low levels of illumination. Rod vision is known as scotopic or dim-light vision.

IMAGE FORMATION IN THE EYE

In a camera, the lens has a fixed focal length. Focusing at various distances is achieved by varying the distance between the lens and the imaging plane, where the film (or imaging chip in the case of a digital camera) is located. In the human eye, the converse is true: the distance between the center of the lens and the imaging sensor (the retina) is fixed, and the focal length needed to achieve proper focus is obtained by varying the shape of the lens.

BRIGHTNESS ADAPTATION AND DISCRIMINATION

Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
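To make the logarithmic relationship concrete, here is a minimal Python sketch (the intensity values, the base-10 logarithm, and the use of NumPy are assumptions chosen for illustration, not details from the text); it shows that equal ratios of incident intensity correspond to equal steps in the modeled perceived brightness:

import numpy as np

# Physical light intensities spanning several orders of magnitude
# (arbitrary units; the specific values are illustrative only).
intensity = np.array([0.01, 0.1, 1.0, 10.0, 100.0])

# Subjective brightness modeled as a logarithmic function of intensity.
# The scale and offset are arbitrary; only the logarithmic form matters here.
perceived = np.log10(intensity)

for I, b in zip(intensity, perceived):
    print(f"intensity = {I:8.2f}  ->  relative brightness = {b:+.2f}")

# Each tenfold increase in intensity adds a constant step (1.0 here) to the
# modeled perceived brightness, which is what "logarithmic" means.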
LIGHT AND THE ELECTROMAGNETIC SPECTRUM

The colors perceived in an object are determined by the nature of the light reflected by the object. A body that reflects light relatively balanced in all visible wavelengths appears white to the observer.

Light that is void of color is called monochromatic (or achromatic) light. The only attribute of monochromatic light is its intensity. Because the intensity of monochromatic light is perceived to vary from black to grays and finally to white, the term gray level is commonly used to denote monochromatic intensity.

Chromatic (color) light spans the electromagnetic spectrum from approximately 0.43 to 0.79 µm. In addition to frequency, three other quantities are used to describe a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W). Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. Brightness is a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity.

IMAGE SENSING AND ACQUISITION

Most of the images in which we are interested are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the "scene" being imaged. Three principal sensor arrangements are used to transform incident energy into digital images: single sensing elements, sensor strips, and sensor arrays. Incoming energy is transformed into a voltage by a combination of the input electrical power and sensor material that is responsive to the type of energy being detected. The output voltage waveform is the response of the sensor, and a digital quantity is obtained by digitizing that response.

A SIMPLE IMAGE FORMATION MODEL

We denote images by two-dimensional functions of the form f(x, y). The value of f at spatial coordinates (x, y) is a scalar quantity whose physical meaning is determined by the source of the image. When an image is generated by a physical process (e.g., electromagnetic waves), its intensity values are proportional to the energy radiated by the physical source. Consequently, f(x, y) must be nonnegative and finite:

0 ≤ f(x, y) < ∞

IMAGE SAMPLING AND QUANTIZATION

To create a digital image, we need to convert the continuous sensed data into digital form. This requires two processes: sampling and quantization. An image may be continuous with respect to the x- and y-coordinates, and also in amplitude. To digitize it, we have to sample the function in both coordinates and in amplitude (intensity level): digitizing the coordinate values is called sampling, and digitizing the amplitude values is called quantization.
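As a minimal sketch of these two steps (the continuous test function, the grid size M = N = 8, and the choice of L = 8 levels are assumptions made for illustration, not values from the text), the following Python code samples a continuous 2-D function on an M × N grid and then quantizes the sampled amplitudes to L integer intensity levels:

import numpy as np

def continuous_image(x, y):
    """Stand-in for a continuous scene: smooth, nonnegative, finite."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * x) * np.cos(2.0 * np.pi * y))

M, N = 8, 8    # number of samples along x (rows) and y (columns)
L = 8          # number of discrete intensity levels (a 3-bit image)

# Sampling: evaluate the continuous function on a discrete M x N grid.
xs = np.linspace(0.0, 1.0, M)
ys = np.linspace(0.0, 1.0, N)
sampled = continuous_image(xs[:, None], ys[None, :])   # shape (M, N)

# Quantization: map each continuous amplitude in [0, 1] to one of L
# integer levels 0, 1, ..., L - 1.
quantized = np.clip((sampled * L).astype(int), 0, L - 1)

print(quantized)   # an M x N array of integer gray levels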
The method of sampling is determined by the sensor arrangement used to generate the image. The quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization.

REPRESENTING DIGITAL IMAGES

Suppose that we sample a continuous image into a digital image, f(x, y), containing M rows and N columns, where (x, y) are discrete coordinates. For notational clarity and convenience, we use integer values for these discrete coordinates: x = 0, 1, 2, ..., M − 1 and y = 0, 1, 2, ..., N − 1. In general, the value of a digital image at any coordinates (x, y) is denoted f(x, y), where x and y are integers. When we need to refer to specific coordinates (i, j), we use the notation f(i, j), where the arguments are integers. The section of the real plane spanned by the coordinates of an image is called the spatial domain, with x and y being referred to as spatial variables or spatial coordinates.

Each element of this array is called an image element, picture element, or pixel.

The origin of an image is at its top left corner. Choosing the origin of f(x, y) at that point makes sense mathematically because digital images in reality are matrices. Note that some programming languages (e.g., MATLAB) start indexing at 1 instead of at 0.

To express sampling and quantization in more formal mathematical terms, let Z and R denote the set of integers and the set of real numbers, respectively. The sampling process may be viewed as partitioning the xy-plane into a grid, with the coordinates of the center of each cell in the grid being a pair of elements from the Cartesian product Z² (also denoted Z × Z), which is the set of all ordered pairs of elements (zi, zj) with zi and zj being integers from Z. Hence, f(x, y) is a digital image if (x, y) are integers from Z² and f is a function that assigns an intensity value (that is, a real number from the set R) to each distinct pair of coordinates (x, y). This functional assignment is the quantization process. If the intensity levels also are integers, then R = Z, and a digital image becomes a 2-D function whose coordinates and amplitude values are integers.
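The following minimal sketch (assuming NumPy and a made-up 3 × 4 example image; the pixel values are illustrative only) shows a digital image as exactly such an integer-coordinate, integer-valued array, with the origin f(0, 0) at the top-left element, x indexing rows and y indexing columns, and a note on MATLAB's 1-based indexing:

import numpy as np

# A tiny M x N digital image (M = 3 rows, N = 4 columns) with integer
# intensities; the values are made up purely for illustration.
f = np.array([[ 0,  50, 100, 150],
              [25,  75, 125, 175],
              [10,  60, 110, 160]])

M, N = f.shape

# Origin (x, y) = (0, 0) is the top-left element of the matrix.
print("f(0, 0) =", f[0, 0])

# x selects the row, y selects the column: x = 0..M-1, y = 0..N-1.
x, y = 2, 3
print(f"f({x}, {y}) =", f[x, y])   # bottom-right pixel in this example

# In MATLAB the same pixel would be addressed as f(3, 4), because
# MATLAB indexing starts at 1 rather than 0.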
Image digitization requires that decisions be made regarding the values of M, N, and the number, L, of discrete intensity levels.

The range of values spanned by the gray scale is referred to as the dynamic range. The dynamic range of an imaging system is the ratio of the maximum measurable intensity to the minimum detectable intensity level in the system. The upper limit is determined by saturation and the lower limit by noise, although noise can be present in lighter intensities also.

Image contrast is the difference in intensity between the highest and lowest intensity levels in an image, and the contrast ratio is the ratio of these two quantities. When an appreciable number of pixels in an image have a high dynamic range, we can expect the image to have high contrast. Conversely, an image with low dynamic range typically has a dull, washed-out gray look.

When an image can have 2^k possible intensity levels, it is common practice to refer to it as a "k-bit image" (e.g., a 256-level image is called an 8-bit image).

BASIC RELATIONSHIPS BETWEEN PIXELS

NEIGHBORS OF A PIXEL

A pixel p at coordinates (x, y) has two horizontal and two vertical neighbors with coordinates (x + 1, y), (x − 1, y), (x, y + 1), (x, y − 1). This set of pixels, called the 4-neighbors of p, is denoted N4(p). The four diagonal neighbors of p have coordinates (x + 1, y + 1), (x + 1, y − 1), (x − 1, y + 1), (x − 1, y − 1) and are denoted ND(p). The diagonal neighbors, together with the 4-neighbors, are called the 8-neighbors of p, denoted N8(p). The set of image locations of the neighbors of a point p is called the neighborhood of p.
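To illustrate these definitions, here is a minimal Python sketch (the helper function name and the decision to discard neighbors falling outside the image borders are my own choices, not notation from the text) that computes N4(p), ND(p), and N8(p) for a pixel p = (x, y) inside an M × N image:

def neighbors(x, y, M, N):
    """Return the 4-, diagonal, and 8-neighbors of pixel p = (x, y)
    that lie inside an M x N image (rows 0..M-1, columns 0..N-1)."""
    n4 = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    nd = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

    inside = lambda c: 0 <= c[0] < M and 0 <= c[1] < N
    n4 = [c for c in n4 if inside(c)]
    nd = [c for c in nd if inside(c)]
    n8 = n4 + nd   # 8-neighbors = 4-neighbors plus the diagonal neighbors

    return n4, nd, n8

# Example: an interior pixel has 4, 4, and 8 neighbors respectively;
# a corner pixel such as (0, 0) has only 2, 1, and 3.
print(neighbors(2, 3, M=5, N=5))
print(neighbors(0, 0, M=5, N=5))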