Image Processing: Resolution and Interpolation PDF
Document Details
C.I. Suez
Dr. Reham Amin
Tags
Summary
This document is a set of lecture notes on image processing, focusing on resolution and interpolation techniques. It introduces the concept of digital image formation, including sampling and quantization processes. The document also explains spatial and intensity resolution parameters and the false contouring problem.
Full Transcript
# IMAGE PROCESSING

## By: Dr. Reham Amin
## Email: Reham [email protected]

# Chapter 2: Digital Image Fundamentals

## Resolution and Interpolation

# A SIMPLE IMAGE FORMATION MODEL

* Illumination (energy) source
* Scene
* Imaging system (internal image plane)
* Output (digitized image)

### FIGURE 2.15: An example of digital image acquisition.
* (a) Illumination (energy) source.
* (b) A scene.
* (c) Imaging system.
* (d) Projection of the scene onto the image plane.
* (e) Digitized image.

# A SIMPLE IMAGE FORMATION MODEL

* An image is denoted by a 2D function of the form f(x,y).
* The value or amplitude of f at spatial coordinates (x,y) is a positive scalar quantity whose physical meaning is determined by the source of the image.
* When an image is generated by a physical process, its values are proportional to the energy radiated (e.g., electromagnetic waves) by a physical source.
* As a consequence, f(x,y) must be nonnegative and finite; that is, $0 \le f(x,y) < \infty$

# A SIMPLE IMAGE FORMATION MODEL

* The function f(x,y) may be characterized by two components:
  * the amount of source illumination incident on the scene being viewed, i(x,y), and
  * the amount of source illumination reflected back by the objects in the scene, r(x,y).
* These are called the illumination and reflectance components and are denoted by i(x,y) and r(x,y), respectively.
* The two functions combine as a product to form f(x,y):
  * $f(x, y) = i(x, y)\,r(x, y)$
  * $0 \le i(x,y) < \infty$
  * $0 \le r(x, y) \le 1$
* Thus, reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
* The nature of i(x,y) is determined by the illumination source, and r(x,y) is determined by the characteristics of the imaged objects.

# A SIMPLE IMAGE FORMATION MODEL

* The intensity of a monochrome image at any coordinates (x,y) is called the gray level, I, of the image at that point: $I = f(x, y)$
* $L_{min} \le I \le L_{max}$
* $L_{min}$ must be positive and $L_{max}$ must be finite.
* $L_{min} = i_{min}\,r_{min}$ and $L_{max} = i_{max}\,r_{max}$
* The interval $[L_{min}, L_{max}]$ is called the intensity (gray) scale.
* Common practice is to shift this interval numerically to [0, 1] or [0, C], where I = 0 is considered black and I = 1 (or C) is considered white on the gray scale.
* All intermediate values are shades of gray varying from black to white.

# Converting continuous sensed data into digital form

* Illumination (energy) source
* Scene
* Imaging system (internal image plane)
* Output (digitized image)

(See FIGURE 2.15 above.)

# SAMPLING AND QUANTIZATION

* To create a digital image, we need to convert the continuous sensed data into digital form.
* This involves two processes: sampling and quantization.
* An image may be continuous with respect to the x and y coordinates and also in its amplitude f(x,y). To convert it into digital form we have to sample the function in both coordinates and in amplitude.
* Digitizing the coordinate values is called sampling.
* Digitizing the amplitude values is called quantization.

# SAMPLING

* Digitizing the coordinate values is called sampling.
* Consider a continuous image and the intensity values along the line segment AB.
* To sample this function, we take equally spaced samples along line AB. The location of each sample is given by a vertical tick mark in the bottom part of the figure.
* The samples are shown as small black squares superimposed on the function; the set of values at these discrete locations gives the sampled function.
* Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.

### FIGURE 2.16:
* (a) Continuous image.
* (b) A scan line showing intensity variations along line AB in the continuous image.
* (c) Sampling and quantization.
* (d) Digital scan line. (The black border in (a) is included for clarity. It is not part of the image.)

### FIGURE 2.17:
* (a) Continuous image projected onto a sensor array.
* (b) Result of image sampling and quantization.

# SAMPLING AND QUANTIZATION

* Digitizing the amplitude values is called quantization.
* To form a digital image, the gray-level values must also be converted (quantized) into discrete quantities.
* For example, we can divide the gray-level scale into eight discrete levels, ranging from black to white.
* The continuous gray levels are then quantized simply by assigning one of the eight discrete gray levels to each sample.
* Starting at the top of the image and carrying out this procedure line by line produces a two-dimensional digital image.

# Digital Image Definition

* A digital image described in a 2D discrete space,

$f(x,y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}$

is derived from an analog image f(x,y) in a 2D continuous space through a sampling process that is frequently referred to as digitization.
* The 2D continuous image f(x,y) is divided into M rows and N columns (M × N samples).
* The intersection of a row and a column is termed a pixel.
* The value assigned to the integer coordinates (m,n), with m = 0, 1, 2, ..., M-1 and n = 0, 1, 2, ..., N-1, is f(m,n). In fact, in most cases f is actually a function of many variables, including depth, color, and time (t).

# Digital Image Definition

### FIGURE 2.19: Coordinate convention used to represent digital images. Because coordinate values are integers, there is a one-to-one correspondence between x and y and the rows (r) and columns (c) of a matrix.

# What is spatial/gray/intensity resolution for a digital image?

* Image digitization requires that decisions be made regarding the values of M, N, and the number, L, of discrete intensity levels:
* M and N must be positive integers.
* The number of intensity levels, L, must be an integer power of two: $L = 2^k$
* where k is an integer, indicating that the discrete levels are equally spaced and that they are integers in the range [0, L-1].
* When an image can have $2^k$ gray levels, it is referred to as a "k-bit image".
* An image with 256 possible gray levels is called an "8-bit image" ($256 = 2^8$).
* For example, it is common to say that an image whose intensity is quantized into 256 levels has 8 bits of intensity resolution.

# How many bits, b, are needed to store a digital image?

* The number, b, of bits required to store a digital image is $b = M \times N \times k$
* When M = N, this equation becomes $b = N^2 k$
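As a quick check of these formulas, here is a minimal Python sketch (the helper names `num_levels` and `storage_bits` are mine, not from the text):

```python
# Sketch of the resolution and storage formulas above (helper names are mine).

def num_levels(k):
    """Number of equally spaced intensity levels in a k-bit image: L = 2^k."""
    return 2 ** k

def storage_bits(M, N, k):
    """Bits needed to store an M x N image with k bits per pixel: b = M*N*k."""
    return M * N * k

levels = num_levels(8)               # an 8-bit image has 256 gray levels
bits = storage_bits(1024, 1024, 8)   # 8,388,608 bits
nbytes = bits // 8                   # 1,048,576 bytes
```

For instance, a 1024 × 1024, 8-bit image requires 1024 × 1024 × 8 = 8,388,608 bits, or 1,048,576 bytes.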
# Spatial and Intensity Resolution

* For instance, a digital camera with a 20-megapixel CCD imaging chip can be expected to have a higher capability to resolve detail than an 8-megapixel camera, assuming that both cameras are equipped with comparable lenses and the comparison images are taken at the same distance.
* Dots per unit distance is a measure of image resolution used in the printing and publishing industry. In the U.S., this measure is usually expressed as dots per inch (dpi).
* To give you an idea of quality:
  * newspapers are printed with a resolution of 75 dpi,
  * magazines at 133 dpi,
  * glossy brochures at 175 dpi,
  * and the book page at which you are presently looking was printed at 2400 dpi.

# Spatial and Intensity Resolution

* Spatial resolution is the smallest discernible detail in an image.
* It involves decisions regarding the number of spatial samples (pixels) used to generate a digital image.
* For instance, a 20-megapixel CCD imaging chip can be expected to have a higher capability to resolve detail than an 8-megapixel camera.
* Intensity resolution refers to the smallest discernible change in intensity level.
* It is influenced by:
  * noise and saturation values, and
  * the capabilities of human perception to analyze and interpret details in the context of an entire scene.
* For example, it is common to say that an image whose intensity is quantized into 256 levels has 8 bits of intensity resolution.
* Unlike spatial resolution, which must be stated on a per-unit-of-distance basis to be meaningful, it is common practice to refer to the number of bits used to quantize intensity as the "intensity resolution."

# Spatial and Intensity Resolution

### FIGURE: Effects of reducing spatial resolution. The images shown are at: (a) 930 dpi, (b) 300 dpi, (c) 150 dpi, and (d) 72 dpi.

# Spatial resolution:

* It is a measure of the smallest discernible detail in an image.
* Spatial resolution can be stated as:
  * line pairs per unit distance, and
  * dots (pixels) per unit distance.
* Dots per unit distance is a measure of image resolution used commonly in the printing and publishing industry, expressed as dots per inch (dpi).
* For example:
  * newspapers are printed with a resolution of 75 dpi,
  * magazines at 133 dpi,
  * glossy brochures at 175 dpi, and
  * a book page at 2400 dpi.

# Effects of Reducing Spatial Resolution

# Intensity resolution:

* It refers to the smallest discernible change in intensity level.
* The number of intensity levels usually is an integer power of two.
* The most common number is 8 bits, with 16 bits being used in some applications.
* Intensity quantization using 32 bits is rare.
* For example, it is common to say that an image whose intensity is quantized into 256 levels has 8 bits of intensity resolution.

# Intensity levels & number of bits

* The more intensity levels used, the finer the level of detail discernible in an image.
* Intensity-level resolution is usually given in terms of the number of bits used to store each intensity level.

| Number of Bits | Number of Intensity Levels | Examples |
|:---:|:---:|:---:|
| 1 | 2 | 0, 1 |
| 2 | 4 | 00, 01, 10, 11 |
| 4 | 16 | 0000, 0101, 1111 |
| 8 | 256 | 00110011, 01010101 |
| 16 | 65,536 | 1010100011010101 |

# Effects of Reducing Intensity Resolution

# False contouring problem

* Measuring discernible changes in intensity level is a highly subjective process. Reducing the number of bits k while keeping the spatial resolution constant creates the problem of false contouring.
* It is caused by the use of an insufficient number of gray levels in the smooth areas of the digital image.
* It is called so because the ridges resemble topographic contours in a map.
* It is generally quite visible in images displayed using 16 or fewer uniformly spaced gray levels.

# False contouring problem

### FIGURE 2.24: (a) 774 x 640, 256-level image. (b)-(d) Image displayed in 128, 64, and 32 intensity levels, while keeping the spatial resolution constant. (Original image courtesy of Dr. David R. Pickens, Department of Radiology & Radiological Sciences, Vanderbilt University Medical Center.)

# False contouring problem

### FIGURE 2.24: (Continued) (e)-(h) Image displayed in 16, 8, 4, and 2 intensity levels.

# Image Interpolation (zooming and shrinking)

* Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections.
* Image shrinking and zooming are image resampling methods.
* Interpolation is the process of using known data to estimate values at unknown locations.
* Suppose that an image of size 500 x 500 pixels has to be enlarged 1.5 times, to 750 x 750 pixels.
* One way to visualize zooming is to create an imaginary 750 x 750 grid with the same pixel spacing as the original, and then shrink it so that it fits exactly over the original image.

# Image Interpolation

* To perform intensity-level assignment for any point in the overlay, we look for its closest pixel in the original image and assign the intensity of that pixel to the new pixel in the 750 x 750 grid.
* After assigning intensities to all the points in the overlay grid, we expand it back to the specified size to obtain the zoomed image.
* This method is called nearest-neighbor interpolation because it assigns to each new location the intensity of its nearest neighbor in the original image.
* This approach is simple, but it has the tendency to produce undesirable artifacts, such as severe distortion of straight edges.
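The nearest-neighbor scheme just described can be sketched in plain Python (a minimal illustration on a list-of-rows grayscale image; the function name and the pixel-center mapping are my own choices, not a library API):

```python
# Minimal nearest-neighbor zoom for a grayscale image stored as a list of
# rows. Each output pixel is mapped back onto the input grid (pixel-center
# mapping) and copies the intensity of the closest input pixel.

def nearest_neighbor_zoom(img, new_h, new_w):
    old_h, old_w = len(img), len(img[0])
    out = []
    for y in range(new_h):
        # Closest source row for this output row.
        src_y = min(old_h - 1, int((y + 0.5) * old_h / new_h))
        row = []
        for x in range(new_w):
            # Closest source column for this output column.
            src_x = min(old_w - 1, int((x + 0.5) * old_w / new_w))
            row.append(img[src_y][src_x])
        out.append(row)
    return out

# Zooming a 2 x 2 image to 4 x 4: each source pixel becomes a 2 x 2 block,
# which is exactly the blocky look that distorts straight edges.
tiny = [[0, 255],
        [64, 128]]
zoomed = nearest_neighbor_zoom(tiny, 4, 4)
```

The blocky replication visible in `zoomed` is the source of the edge-distortion artifacts mentioned above.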
* For this reason, it is used infrequently in practice.

# Image Interpolation

### FIGURE: Original image (3692 x 2812 pixels); image reduced to 72 dpi and zoomed back to its original size using nearest-neighbor interpolation; image shrunk and zoomed using bilinear interpolation; image shrunk and zoomed using bicubic interpolation.

* It is possible to use more neighbors in interpolation, and there are more complex techniques, such as splines and wavelets, that in some instances can yield better results.

# Relationship between pixels

* Let's consider several important relationships between pixels in a digital image.

## Neighbors of a pixel

* A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose coordinates are given by:

```
           (x-1, y)
(x, y-1)   p(x,y)    (x, y+1)
           (x+1, y)
```

* This set of pixels, called the 4-neighbors of p, is denoted by $N_4(p)$.
* Each pixel is a unit distance from (x,y).
* Some of the neighbors of p lie outside the digital image if (x,y) is on the border of the image.

# RELATIONSHIP BETWEEN PIXELS

* The four diagonal neighbors of p have coordinates (x-1, y-1), (x-1, y+1), (x+1, y-1), (x+1, y+1) and are denoted by $N_D(p)$:

```
(x-1, y-1)            (x-1, y+1)
           p(x,y)
(x+1, y-1)            (x+1, y+1)
```

* These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by $N_8(p)$:

```
(x-1, y-1)  (x-1, y)  (x-1, y+1)
(x, y-1)    p(x,y)    (x, y+1)
(x+1, y-1)  (x+1, y)  (x+1, y+1)
```

* As before, some of the points in $N_D(p)$ and $N_8(p)$ fall outside the image if (x,y) is on the border of the image.

# Adjacency

* Let V be the set of intensity values used to define adjacency.
* In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
* In a gray-scale image, the idea is the same, but set V typically contains more elements.
* For example, for adjacency of pixels with a range of possible intensity values 0 to 255, set V could be any subset of these 256 values.
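The neighbor sets $N_4(p)$, $N_D(p)$, and $N_8(p)$ defined above can be sketched as small Python helpers (the names are mine; candidates falling outside an M × N image are dropped, matching the border remark above):

```python
# Neighbor sets of a pixel p = (x, y) in an M x N image, following the
# N4 / ND / N8 definitions above; candidates outside the image are dropped.

def n4(x, y, M, N):
    """4-neighbors: the horizontal and vertical neighbors of (x, y)."""
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return {(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N}

def nd(x, y, M, N):
    """Diagonal neighbors of (x, y)."""
    cand = [(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)]
    return {(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N}

def n8(x, y, M, N):
    """8-neighbors: the union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y, M, N) | nd(x, y, M, N)

# An interior pixel of a 3 x 3 image has 4, 4, and 8 neighbors; the corner
# pixel (0, 0) keeps only the neighbors that fall inside the image.
interior = n8(1, 1, 3, 3)
corner = n8(0, 0, 3, 3)
```

For a 3 × 3 image, the corner pixel (0, 0) keeps only two 4-neighbors and one diagonal neighbor.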
# Adjacency

* We consider three types of adjacency:
* 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set $N_4(p)$.
* 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set $N_8(p)$.
* m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if
  * q is in $N_4(p)$, or
  * q is in $N_D(p)$ and the set $N_4(p) \cap N_4(q)$ has no pixels whose values are from V.

# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; show the 4-adjacencies.

```
0 1 1
0 1 0
0 0 1
```

# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; show the 8-adjacencies.

```
0 1 1
0 1 0
0 0 1
```

# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; show the m-adjacencies.

```
0 1 1
0 1 0
0 0 1
```

# Adjacency

* Consider the pixel arrangement shown in the figure for V = {1}.
* The three pixels at the top of the figure show multiple (ambiguous) 8-adjacency, as indicated by the dashed lines.
* This ambiguity is removed by using m-adjacency, as shown in the figure.

# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; find an 8-path from (1,3) to (3,3).

```
0 1 1
0 1 0
0 0 1
```

# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; find an m-path from (1,3) to (3,3).

```
0 1 1
0 1 0
0 0 1
```

# Adjacency

* A path from pixel p to pixel q is a sequence of distinct, adjacent pixels with coordinates $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, where $(x_0, y_0) = p$ and $(x_n, y_n) = q$.
* For every pixel p in S, the set of pixels in S that are connected to p is called a connected component of S.
* If S has only one connected component, then S is called a connected set.
* We call R a region of the image if R is a connected set.
* Two regions, $R_i$ and $R_j$, are said to be adjacent if their union forms a connected set.
* Regions that are not adjacent are said to be disjoint.
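The three adjacency definitions can be sketched as predicates in Python (a minimal illustration; the function names are mine, the image is a list of rows, V is a set of intensity values, and coordinates are 0-indexed (row, col) tuples):

```python
# 4-, 8-, and m-adjacency tests between pixels p and q, following the
# definitions above. img is a list of rows; V is the set of values that
# defines adjacency.

def in_v(img, p, V):
    """True if p lies inside the image and its value belongs to V."""
    x, y = p
    return 0 <= x < len(img) and 0 <= y < len(img[0]) and img[x][y] in V

def n4_coords(p):
    x, y = p
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd_coords(p):
    x, y = p
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def adjacent_4(img, p, q, V):
    return in_v(img, p, V) and in_v(img, q, V) and q in n4_coords(p)

def adjacent_8(img, p, q, V):
    return in_v(img, p, V) and in_v(img, q, V) and q in (n4_coords(p) | nd_coords(p))

def adjacent_m(img, p, q, V):
    if not (in_v(img, p, V) and in_v(img, q, V)):
        return False
    if q in n4_coords(p):
        return True
    # Diagonal case: allowed only if p and q share no 4-neighbor whose
    # value is in V (this removes the ambiguous double 8-paths).
    shared = n4_coords(p) & n4_coords(q)
    return q in nd_coords(p) and not any(in_v(img, r, V) for r in shared)

# A 3 x 3 segment with V = {1}: pixels (0,2) and (1,1) are 8-adjacent but
# not m-adjacent, since they share the 4-neighbor (0,1) whose value is in V.
seg = [[0, 1, 1],
       [0, 1, 0],
       [0, 0, 1]]
```

Running the predicates on `seg` shows how m-adjacency removes the ambiguity: the diagonal pair (0,2)/(1,1) fails the m-test because of the shared 4-neighbor (0,1), while the diagonal pair (1,1)/(2,2) passes it.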
# Adjacency example

* Consider the image segment shown below:
* Let V = {1}; are the two regions adjacent?

```
1 1 1 1 0 0 1 1 0 1 1 1 1 0 1 0
R1 R2
```

# Adjacent regions

```
1 1 1 1 0 1 0 1 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 0 0 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0 0 0
```

* The two regions (of 1s) in the figure are adjacent only if 8-adjacency is used.
* (According to the earlier definition, a 4-path between the two regions does not exist, so their union is not a connected set.)
* The circled point is part of the boundary of the 1-valued pixels only if 8-adjacency between the region and background is used.
* The inner boundary of the 1-valued region does not form a closed path, but its outer boundary does.

# Distance metric

* For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function, or metric, if $D(p,q) \ge 0$ (with $D(p,q) = 0$ iff $p = q$), $D(p,q) = D(q,p)$, and $D(p,z) \le D(p,q) + D(q,z)$.
* The Euclidean distance between p and q is defined as $D_e(p, q) = [(x - s)^2 + (y - t)^2]^{1/2}$
* The city-block distance between p and q is defined as $D_4(p, q) = |x - s| + |y - t|$
* The chessboard distance between p and q is defined as $D_8(p, q) = \max(|x - s|, |y - t|)$

# Thank you

## With best wishes

## Dr. Reham Amin