LSGI536 Remote Sensing Image Processing Lecture 4 PDF
The Hong Kong Polytechnic University
Dr. Zhiwei Li
Summary
This document is a lecture on remote sensing image processing focused on image enhancement. It covers contrast enhancement and contrast stretching (linear stretch, histogram equalization, histogram matching, Gaussian stretch), spatial filtering with low pass and high pass filters, density slicing, and band ratioing.
Full Transcript
LSGI536 Remote Sensing Image Processing
Lecture 4: Remote Sensing Image Enhancement
Dr. Zhiwei Li, Research Assistant Professor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University
Email: [email protected]

Outline
1. Contrast Enhancement: what is it, and why?
2. Contrast Stretch: linear stretch, histogram equalization, histogram matching, Gaussian stretch
3. Spatial Filtering: low pass filters (mean filter, median filter, majority filter, etc.); high pass filters (edge enhancement, directional first differencing)
4. Density Slicing and Image Arithmetic: density slicing; band ratioing (simple ratios, normalized difference indices)

Section 1: Contrast Enhancement

Contrast enhancement: definition
- A technique for increasing the visual distinction between features in a scene, achieved by spectral feature manipulation.
- The aim is to produce the 'best' image for a particular application.
- Applied to image data after the appropriate preprocessing. Noise removal must be done before contrast enhancement; without it, the image interpreter is left with the prospect of analyzing enhanced noise.

Why is contrast enhancement necessary?
- Display and recording devices operate over a range of 0-255 grey levels, but sensor data in a single scene rarely extend over this whole range.
- It is therefore necessary to expand the narrow range of brightness levels in any one scene to take advantage of the whole 0-255 display range, which maximizes the contrast between features.
- In short, contrast enhancement changes the image value distribution to cover a wide range.
- In a low-contrast image, values are concentrated in a narrow range (mostly dark, mostly bright, or mostly medium values). The contrast of an image is revealed by its histogram.

(Figures: generic appearance of a histogram; SPOT XS Band 3 (NIR) image and its histogram.)

Intensity transformation
- Contrast enhancement can be realized by a grey-level (intensity) transformation function of the form s = T(r), where r denotes the grey level of point (x, y) in the input image and s the grey level of the same point in the processed image.
- Since the enhancement is applied at each individual point in the image, the technique is referred to as point processing.
- Example: darkening the levels below m and brightening the levels above m. This is a thresholding function: at r = m the output becomes s = 1 (white).

(Figure: types of contrast stretch; principle of contrast stretch enhancement.)

Section 2: Contrast Stretch
Linear Contrast Stretch

Linear contrast stretch
- Translates the image pixel values from the observed range (Dmin to Dmax), scaling Dmin to 0 and Dmax to 255; intermediate values retain their relative positions, so that e.g. the median input pixel maps to 127.
- Algorithm: DN' = (DN − Dmin) / (Dmax − Dmin) × 255. For example, if the original range is 50-100, then DN' = (DN − 50) / 50 × 255 for DN ∈ [50, 100].

(Figures: transformation function for linear contrast stretch; images and histogram plots before and after linear contrast stretch.)

Disadvantage of linear contrast stretch
It may assign a large range of display values to a small number of input values. E.g., in the image histogram on the slide, DN values 60-108 represent few pixels, yet the linear stretch allocates them output values of 0-127 (half the output range); most of the pixels in the image are confined to only half of the output range.
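To make the mapping concrete, here is a minimal NumPy sketch of the linear stretch formula above. The function and array names are illustrative, and rounding to the nearest integer is an assumption rather than something specified in the lecture.

```python
import numpy as np

def linear_stretch(dn, new_min=0, new_max=255):
    """Linear contrast stretch: map the observed range [Dmin, Dmax] to [new_min, new_max]."""
    dn = dn.astype(np.float64)
    d_min, d_max = dn.min(), dn.max()          # observed range of the band
    out = (dn - d_min) / (d_max - d_min) * (new_max - new_min) + new_min
    return np.round(out).astype(np.uint8)      # assumes an 8-bit display range

# The slide's example: an observed range of 50-100 expands to 0-255.
band = np.array([[50, 75], [90, 100]])
print(linear_stretch(band))  # -> [[  0 128] [204 255]]
```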
Section 2: Contrast Stretch
Histogram Equalization Stretch

Histogram equalization stretch
- Image values are assigned to display levels based on their frequency of occurrence in the image.
- In the slide's example, the image range 109-158 is stretched over a large portion of the display levels (39-255), while a smaller portion (0-38) is reserved for the infrequently occurring values 60-108.
- In general, the goal is to transform the grey-level distribution of the input image, an arbitrary probability density function (PDF), into a uniform PDF.

However:
- Relative brightness in the original image is not maintained.
- The number of levels used is reduced.
- The result may be unsatisfactory compared with a linear stretch if few LUT (lookup table) entries are occupied in the output stretch.

Histogram equalization: continuous formulation
Let the variable r represent the grey levels of an input image, normalized to the interval [0, 1] so that r = 0 is black and r = 1 is white. Consider a continuous transformation function of the form

s = T(r), for 0 ≤ r ≤ 1    [eq. 1]

The goal is to obtain a uniform histogram in the result, i.e. h(i) = constant for each pixel value i.

s = T(r) produces a grey level s for every value r in the input image and satisfies the following two conditions:
1) T(r) is single-valued and increasing in the interval 0 ≤ r ≤ 1; and
2) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1.
Condition 1 guarantees that the inverse transformation r = T⁻¹(s), 0 ≤ s ≤ 1, exists, and that the increasing order from black to white is preserved in the output image. Condition 2 guarantees that the output grey levels have the same range as the input levels.

Histogram equalization for continuous random variables
Consider the grey levels r and s as continuous random variables (RVs), and let p_r(r) and p_s(s) be their probability density functions. Since T(r) satisfies the two conditions, taking the transformation to be the cumulative distribution function,

s = T(r) = ∫₀ʳ p_r(w) dw,

yields p_s(s) = p_r(r) |dr/ds| = 1 for 0 ≤ s ≤ 1: a transformation from a varying probability density to an equal (uniform) one.

Histogram equalization for discrete values
For discrete values, let n be the total number of pixels in the image, n_k the number of pixels having grey level r_k, and L the number of grey levels. The probability of grey level r_k in an image can be estimated as

p_r(r_k) = n_k / n, for k = 0, 1, …, L − 1.

The discrete version of the transformation function (also a CDF, cumulative distribution function) is

s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n.

This transformation is called histogram equalization or histogram linearization.
(See https://en.wikipedia.org/wiki/Histogram_equalization)

Histogram equalization: implementation procedure
Given an input image of size M × N with L levels (e.g. an 8×8-pixel image with 8-bit levels):
1. Generate the list of image values and their counts.
2. Compute the cumulative number of pixels for each value.
3. Compute the new value h(ν_k) for each image value in the input image. In the form given on the Wikipedia page linked above,

h(ν_k) = round( (cdf(ν_k) − cdf_min) / (M × N − cdf_min) × (L − 1) ),

where k = 0, 1, …, L − 1, n = M × N, ν_i is the number of pixels having level i, cdf(ν_k) = Σ_{i=0}^{k} ν_i is the cumulative count, and cdf_min is the smallest nonzero cumulative count.

(Worked example on the slides with L = 16.)

Histogram equalization: summary
- The goal is to produce an output image with a uniform histogram. The discrete transformation cannot produce an exactly uniform histogram, but it can produce a histogram-equalized image that spans the full range of the grayscale.
- Results are predictable and simple to implement.
- However, there are situations where basing the enhancement on a uniform histogram is not the best approach. For example, when the input image has a high concentration of pixels very near 0, the net effect is to map a very narrow interval of dark pixels into the upper end of the output image, resulting in a light, washed-out appearance.
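The discrete procedure above fits in a few lines of NumPy. A minimal sketch, assuming a single-band image with integer values in [0, L−1] and at least two distinct grey levels (names are illustrative):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Histogram equalization via a CDF-based lookup table (integer single-band image)."""
    hist = np.bincount(img.ravel(), minlength=levels)  # count of each grey level
    cdf = hist.cumsum()                                # cumulative number of pixels
    cdf_min = cdf[cdf > 0].min()                       # smallest nonzero cumulative count
    n = img.size                                       # n = M x N
    # h(v_k) = round((cdf(v_k) - cdf_min) / (n - cdf_min) * (L - 1))
    lut = np.round((cdf - cdf_min) / (n - cdf_min) * (levels - 1)).astype(np.uint8)
    return lut[img]                                    # apply the lookup table
```

For a multi-band image, the same lookup-table construction would be applied to each band separately.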
Section 2: Contrast Stretch
Histogram Matching

Histogram matching
Assume we have two images, each with its own histogram. Before going further, we want to answer this question: is it possible to modify one image based on the contrast of the other? The answer is yes, and this is the definition of histogram matching: given images A and B, the contrast level of A can be modified according to B.

Histogram matching is useful when we want to unify the contrast level of a group of images. In fact, histogram equalization can itself be seen as a case of histogram matching, since it modifies the histogram of the input image to be similar to a particular target distribution (the uniform one). To match the histogram of image A to image B:
1. Equalize the histograms of both images.
2. Map each pixel of A to B using the equalized histograms.
3. Modify each pixel of A based on B.

(Figures: histogram matching illustrations.)

This method, which allows us to specify the shape of the histogram that we want the processed image to have, is called histogram matching or histogram specification. Let p_r(w) be the PDF of the input image and p_z(t) the PDF that we wish the output image to have. The goal is to replace each input level r_k with an output level z_k.

Implementation:
1. Obtain the histogram of the input image.
2. Perform histogram equalization on the input image, i.e. compute s_k = T(r_k) for each r_k.
3. Obtain the transformation function G from the desired PDF: G(z_k) = Σ_{i=0}^{k} p_z(z_i).
4. Compute z_k for each value of s_k such that G(z_k) = s_k, i.e. z_k = G⁻¹(s_k).
Then, for each pixel in the original image, if the value of that pixel is r_k, map this value to its corresponding level s_k, and then map level s_k into the final level z_k.

(Worked examples on the slides.)

Comparison between histogram equalization and histogram matching
If there is a large concentration of pixels in the input histogram at levels very near 0, a uniform target histogram maps only a very narrow interval of dark pixels into the upper end of the grey scale of the output image (note that the output grey levels start above 128 in the example). (Figure: curve 1 is based on histogram equalization, curve 2 on histogram matching.)
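A compact NumPy sketch of this procedure, assuming 8-bit single-band integer images (function and variable names are illustrative): equalize both CDFs, then invert the reference CDF.

```python
import numpy as np

def histogram_match(source, reference, levels=256):
    """Modify `source` so its histogram resembles that of `reference`."""
    src_cdf = np.bincount(source.ravel(), minlength=levels).cumsum()
    ref_cdf = np.bincount(reference.ravel(), minlength=levels).cumsum()
    src_cdf = src_cdf / src_cdf[-1]   # s_k = T(r_k), normalized to [0, 1]
    ref_cdf = ref_cdf / ref_cdf[-1]   # G(z_k)
    # z_k = G^{-1}(s_k): the first reference level whose CDF reaches s_k
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1).astype(np.uint8)
    return lut[source]
```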
Section 2: Contrast Stretch
Gaussian Stretch

Normal (Gaussian) distribution
(Figure: normal distribution curves for several values of μ and σ².)
- μ is the mean or expectation of the distribution (and also its median and mode)
- σ is its standard deviation
- σ² is the variance of the distribution
See: https://www.mathsisfun.com/data/standard-normal-distribution-table.html
https://www.intmath.com/counting-probability/normal-distribution-graph-interactive.php
https://en.wikipedia.org/wiki/Normal_distribution

The Gaussian stretch procedure (columns of the worked table):
i. Original pixel value
ii. Target distribution (z value)
iii. Probability of each class (target distribution)
iv. Target number of pixels in each class, i.e. probability × total number of pixels
v. Cumulative target number of pixels
vi. Observed number of pixels in the input image
vii. Cumulative observed number of pixels in the input image
viii. New pixel value

Example: for original class 0, the cumulative observed number of pixels is 1311. The closest class is found by minimum difference against the cumulative target counts: |1311 − 530| = 781 for class 0 and |1311 − 1398| = 87 for class 1, therefore the new class is class 1. (Why?)

Gaussian stretch
- Fits the observed histogram to a normal (Gaussian) distribution form.
- The normal distribution gives the probability of observing a value if the mean and standard deviation are known. E.g., assume the number of pixels is 262144 and the number of quantization levels is 16; then the target number of pixels in each class for the normal distribution (column iv) is the probability (column iii) × 262144.
- The cumulative number of pixels at each level is then calculated, and the output pixel value is determined by comparing columns v and vii. Once the value of column viii is determined, it is written to the LUT.

(Figures: result of Gaussian stretch; raw SPOT XS Band 3 (NIR) data with output limits set to input limits; linear stretch of the actual data range; result of histogram equalization; result of the Gaussian transform.)

Questions
- What is the prerequisite of image enhancement?
- List reasons to perform contrast enhancement.
- What are the advantages and disadvantages of histogram equalization?
- Compare histogram equalization and histogram matching.
- Explain the procedure for performing histogram equalization on discrete values.

Section 3: Spatial Filtering

Image filtering
- Spatial filters emphasize or de-emphasize image data of various spatial frequencies.
- Spatial frequency refers to the "roughness" of the brightness variations, i.e. the degree of change in tonal variation across the image: rapid change → high spatial frequency → rough; slow change → low spatial frequency → smooth.
- Areas of high spatial frequency are tonally rough: grey levels change abruptly over small distances or a small number of pixels, e.g. across roads or field borders.
- Smooth areas have low spatial frequency: grey levels vary only gradually over a relatively large number of pixels, e.g. large agricultural fields or water bodies.

Low pass filters (smoothing) emphasize low-frequency changes in brightness and de-emphasize high-frequency local detail:
- smoothing filters (mean, median)
- noise removal filters (noise is usually scattered and differs from its surroundings, hence high frequency)

High pass filters (sharpening) emphasize the high-frequency components of images and de-emphasize more general, low-frequency detail:
- edge enhancement filters
- directional first differencing filters
- feature (edge) detection: edges tend to be high frequency, and the enhancement can be specified to look for specific edges

(Figures: original SPOT Pan image; smoothed; sharpened; edges found.)

Image filtering: convolution
The image processing operation that applies a spatial filter is called convolution. Convolution involves two inputs: an image, and a moving window, also called a kernel or convolution matrix. Pixels are modified on the basis of the grey levels of neighbouring pixels. (Figure: input image → moving window (kernel) → output image; convolution with padding and no strides.)

Kernel
The kernel is a matrix (moving window) that is moved pixel by pixel over the input image. For an m × n kernel with m < n or m > n, the central element of the filter should be located at the intersection of the central row and column of the filter window. When m = n, the filter window is a square, odd-numbered array of elements (3×3, 5×5, etc.). Those elements represent weights applied to the corresponding digital numbers of the input image, and the result is summed for the central pixel, as in the mean convolution kernel, whose nine weights are all 1/9.
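To make the moving-window operation concrete, here is a plain NumPy sketch of 2-D convolution with zero padding (detailed just below) and no strides. It is a didactic double loop rather than an optimized implementation; strictly speaking convolution flips the kernel, but for the symmetric kernels used here that makes no difference.

```python
import numpy as np

def convolve2d(image, kernel):
    """Apply a square, odd-sized kernel; zero padding keeps output size = input size."""
    k = kernel.shape[0] // 2
    padded = np.pad(image.astype(np.float64), k, mode="constant")  # zero padding
    out = np.zeros(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i + kernel.shape[0], j:j + kernel.shape[1]]
            out[i, j] = (window * kernel).sum()   # weighted sum for the centre pixel
    return out

mean_kernel = np.full((3, 3), 1 / 9)  # 3x3 mean kernel: all weights 1/9, summing to 1
smoothed = convolve2d(np.random.randint(0, 256, (6, 8)), mean_kernel)
```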
Convolution
(Figure: the kernel sliding over the input image, showing the 1st and 9th convolution operations.)

Padding image borders
Original 4 × 4 image:

2 6 5 4
8 9 8 6
7 7 8 7
6 8 7 6

Replication padding:

2 2 6 5 4 4
2 2 6 5 4 4
8 8 9 8 6 6
7 7 7 8 7 7
6 6 8 7 6 6
6 6 8 7 6 6

Zero padding:

0 0 0 0 0 0
0 2 6 5 4 0
0 8 9 8 6 0
0 7 7 8 7 0
0 6 8 7 6 0
0 0 0 0 0 0

(Figure: convolution of the input with the kernel to produce the output.)

A convolution kernel can be used for blurring, sharpening, embossing, edge detection, and more.

Section 3: Spatial Filtering
Low Pass Filters

Low pass filters: smoothing
Mean filter: each of the nine kernel weights is 1/9, so the weights sum to 1 (1/9 + 1/9 + … + 1/9 = 1) and the output is the average of the 3×3 neighbourhood. (Figure: kernel, input image, output image.)

Unequal-weighted smoothing filters, e.g.:

0.25 0.50 0.25        1 1 1
0.50 1.00 0.50        1 2 1
0.25 0.50 0.25        1 1 1

Low pass 3×3 mean filter (moving average filter)
The output value for the central image pixel covered by the kernel (k) is the sum of the products of each of the surrounding input pixel values and their corresponding kernel weights (W). Other frequencies may be smoothed by altering the size of the kernel or the weighting factors.

(Figure: traverse of pixel values across the raw image and after the mean filter; x-axis: pixel number, y-axis: pixel value.)

Effects of the mean filter
- Reduces the overall variability of the image and lowers its contrast.
- Pixels that have larger or smaller values than their neighbourhood average are respectively decreased or increased in value, so local detail is lost.
- It retrieves the overall pattern of values that is of interest, rather than the details of local variation.
- Example: in a 3×3 neighbourhood of 4s with a central 8, the filter replaces the 8 with the neighbourhood average (8 × 4 + 8) / 9 ≈ 4.4, removing the spike.

(Figure: image smoothing using kernel sizes 3×3, 5×5 and 7×7; IKONOS panchromatic image of Shau Kei Wan.)

Median smoothing filter
- Superior to the mean filter, as the median is an actual value in the dataset (kernel) and is less sensitive to errors or extreme values.
- E.g., for the pixel values 3, 1, 2, 8, 5, 3, 9, 4, 27 in a 3×3 kernel, the median is 4 (sorted: 1, 2, 3, 3, 4, 5, 8, 9, 27), while the mean is 6.89, rounded to 7; but 7 does not appear in the original dataset.
- The mean is larger than six of the nine observed values because it is influenced by the extreme value 27 (three times the next highest value in the dataset).
- Isolated extreme pixels, which may represent noise or spikes, can therefore be removed by the median.
- The median preserves edges better than the mean, which blurs them (see the figure on the next slide).

Comparison of median and mean filters
Both the median and the moving-average filters remove high-frequency oscillations, but the median filter more successfully removes isolated spikes and better preserves edges, defined as pixels at which the gradient (slope) of the grey-level values changes markedly.

Majority smoothing filter
As the kernel passes over the image, the central pixel is set to the majority value within the kernel. Used for identifying classes in the data, for which the mean and median are irrelevant; removes misclassified "salt and pepper" pixels.

Gaussian smoothing filter: the kernel weights are generated by the Gaussian function.

(Figure: smoothing filters available in software.)

Stripe noise removal
(Figure: sixteenth-line banding noise, lines brighter or darker than the others, in Landsat Band 2 (green) of 3 March 1996, Deep Bay, Hong Kong; the same scene after a 7×7 median filter.)

Line drop removal
A dropped line can be removed by averaging the pixels on each side of the line, using a one-dimensional 3×1 vertical filter with a threshold of 0 (i.e. detecting values of 0 or no data).
Question: how would you design a filter to achieve the above processing? (One possible sketch follows.)
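One possible design, sketched under the assumption that a dropped line appears as an entire row equal to 0 (or a nodata value): detect such rows, then replace each with the average of the rows immediately above and below, which is the 3×1 vertical averaging described above.

```python
import numpy as np

def remove_dropped_lines(image, nodata=0):
    """Replace fully dropped scan lines with the mean of their vertical neighbours."""
    out = image.astype(np.float64)
    dropped = np.where((image == nodata).all(axis=1))[0]  # rows that are all nodata
    for r in dropped:
        if 0 < r < image.shape[0] - 1:                    # skip the first/last rows
            out[r] = (out[r - 1] + out[r + 1]) / 2.0      # 3x1 vertical average
    return np.round(out).astype(image.dtype)
```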
Section 3: Spatial Filtering
High Pass Filters

High pass filters: sharpening
(Figure: edge-enhancing filter; kernel, input image, output image.)

Examples of high pass kernels that sharpen edges:

-1 -1 -1        1 -2  1
-1  9 -1       -2  5 -2
-1 -1 -1        1 -2  1

High pass (sharpening) filters
- Emphasize the high-frequency component by exaggerating local contrast, highlighting fine detail, or enhancing detail that has been blurred.
- A sharpening filter seeks to emphasize changes: when the kernel is applied over a region of uniform values, the output value does not change; the output is maximal when the centre pixel differs significantly from the surrounding pixels.

(Figure: before and after a 3×3 high pass filter; IKONOS panchromatic, Mt. Butler.)

Image subtraction method for the high pass filter (image source: Kriti Bajpai et al., 2017, [link]).

Directional first differencing
- Emphasizes edges in image data by determining the derivative of the grey levels with respect to a given direction, i.e. comparing each pixel with one of its neighbours.
- The result can be positive or negative, and thus outside the byte range, so the range must be rescaled; and because pixel-to-pixel differences are often very small, a contrast stretch must be applied.

(Figure: Y-directional filter emphasising E-W trends; Landsat band 2 (green), west of Guangzhou.)

Example input for directional first differencing (six identical rows containing a vertical edge):

0 0 0 9 8 0 0 0

Edge enhancement using the Laplacian convolution kernel
The Laplacian is a second derivative (as opposed to the gradient, which is a first derivative) and is invariant to rotation, meaning that it is insensitive to the direction in which the discontinuities (points, lines, and edges) run. Example Laplacian kernels:

 0 -1  0       -1 -1 -1        1 -2  1
-1  4 -1       -1  8 -1       -2  4 -2
 0 -1  0       -1 -1 -1        1 -2  1

(Figure: edge enhancement through directional first differencing: (a) original image; (b) vertical first difference; (c) horizontal first difference; (d) left diagonal first difference; (e) right diagonal first difference; (f) Laplacian edge detector.)

Section 4: Density Slicing and Image Arithmetic

Density slicing
The DNs along the x-axis of an image histogram are divided into a series of analyst-specified intervals or "slices". DNs falling within a given interval in the input image are then displayed at a single DN in the output image; if six different slices are established, the output image contains only six different grey levels. (Figure: density slice for surface temperature visualization.)

Spectral index
Indices may be used to enhance a particular feature in an image, and different indices enhance different Earth surface features, such as vegetation, water, snow, or buildings:
- Band ratio
- Normalized Difference Vegetation Index (NDVI)
- Normalized Difference Water Index (NDWI)
- Normalized Difference Snow Index (NDSI)
- Normalized Difference Building Index (NDBI)
- …

Image arithmetic: band ratioing
Band ratioing is the process of dividing the pixel values in one image by the corresponding pixel values in another image. Reasons to use band ratioing:
- to enhance the image by bringing out the Earth's surface cover types; and
- to provide independence from variations in scene illumination.

Why can a band ratio highlight subtle spectral changes? Because the ratio emphasizes the differences between two bands for different materials: e.g. vegetation is darker in the visible but brighter in the NIR than soil, so the ratio difference is greater than in either band individually. (A minimal code sketch of both index types is given below.)
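A minimal NumPy sketch of both index types, assuming two co-registered arrays `nir` and `red` (the names, the epsilon guard, and the zero fill for empty denominators are illustrative choices, not from the lecture):

```python
import numpy as np

def simple_ratio(nir, red, eps=1e-12):
    """Simple band ratio NIR / Red; eps guards against division by zero."""
    return nir.astype(np.float64) / (red.astype(np.float64) + eps)

def normalized_difference(b1, b2):
    """Normalized difference index (b1 - b2) / (b1 + b2), e.g. NDVI with b1=NIR, b2=Red."""
    b1 = b1.astype(np.float64)
    b2 = b2.astype(np.float64)
    den = b1 + b2
    return np.divide(b1 - b2, den, out=np.zeros_like(den), where=den != 0)
```

The normalized difference form keeps values within [−1, 1], as noted for NDVI below.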
Ratio images
- Band ratios are commonly used as vegetation indices aimed at identifying greenness and biomass; a NIR/Red ratio is the most common.
- The number of ratios possible from n bands is n(n − 1): thus SPOT 5 (3 bands) gives 6, and Landsat ETM (6 bands) gives 30.
- Ratios can be used to generate false colour composites by combining three monochromatic ratios.
(Figure: a band ratio of the TM near-infrared band (band 4) divided by the visible red band (band 3), which creates a vegetation index.)

Normalized difference vegetation index (NDVI)
- NDVI is used to quantify vegetation greenness and is useful in understanding vegetation density and assessing changes in plant health.
- NDVI is calculated from the red (R) and near-infrared (NIR) values in the traditional fashion:

NDVI = (NIR − R) / (NIR + R)

- NDVI takes on values between −1.0 and 1.0, where values equal to or less than zero indicate non-vegetated areas.
- It is not an absolute value but is formed from the sum and difference of bands, and it correlates well with biomass, leaf chlorophyll levels, leaf area index values, and so on.

NDVI band numbers (fill in the bands):
In Landsat 4-7, NDVI = (Band ? − Band ?) / (Band ? + Band ?).
In Landsat 8/9, NDVI = (Band ? − Band ?) / (Band ? + Band ?).

Answer:
In Landsat 4-7, NDVI = (Band 4 − Band 3) / (Band 4 + Band 3).
In Landsat 8/9, NDVI = (Band 5 − Band 4) / (Band 5 + Band 4).
https://www.usgs.gov/media/images/landsat-surface-reflectance-and-normalized-difference-vegetation-index

(Figure: "China and India Lead the Way in Greening", https://earthobservatory.nasa.gov/images/144540/china-and-india-lead-the-way-in-greening)

Questions
- Write down the kernels for, and explain, the low pass filter and the high pass filter.
- Explain band ratioing.

Homework: histogram equalization and matching
Please conduct histogram equalization and matching by yourself using the shared Excel documents on Blackboard.

Preliminary group project presentation
The presentation date is March 6, in Lecture 5: a 3-minute presentation with PPT, mainly introducing the topic, literature review, and research plan. Existing problems can also be listed for discussion.

End of Lecture 4