Edge Detection
Summary
This document provides a comprehensive overview of edge detection techniques, including the Prewitt, Sobel, Laplacian, and Canny methods. It details the theoretical concepts, practical implementation, and use of these algorithms with Python and OpenCV. The information presented is useful for understanding edge detection in computer vision.
Full Transcript
Chapter 4: Edge Detection
Practical on different edge detection methods, with examples using Python.

What is Edge Detection?
Edge detection is an image processing technique used to identify points in a digital image with discontinuities, that is, sharp changes in image brightness. The points where the image brightness varies sharply are called the edges (or boundaries) of the image. Edge detection is a basic step in image processing, pattern recognition in images, and computer vision. The convolution operation is the most common operation used in edge detection.

Edge Detection Concepts: Edge Models
Edge models are theoretical constructs used to describe and understand the different types of edges that can occur in an image. These models help in developing edge detection algorithms by categorizing the types of intensity changes that signify edges. The basic edge models are the step, the ramp, and the roof. A step edge represents an abrupt change in intensity, where the image intensity transitions from one value to another in a single step. A ramp edge describes a gradual transition in intensity over a certain distance rather than an abrupt change. A roof edge represents a peak or ridge in the intensity profile, where the intensity increases to a maximum and then decreases.
[Figure: From left to right, models (ideal representations) of a step, a ramp, and a roof edge, and their corresponding intensity profiles. Source: Digital Image Processing by R. C. Gonzalez & R. E. Woods]

Edge Detection Concepts: Image Intensity Function
The image intensity function represents the brightness or intensity of each pixel in a grayscale image. In a color image, the intensity function can be extended to include multiple channels (e.g., red, green, and blue in RGB images).
[Figure: A sharp variation of the intensity function across a portion of a 2-D grayscale image. Source: Digital Image Processing by R. C. Gonzalez & R. E. Woods]

Edge Detection Concepts: First and Second Derivative
The first derivative of an image measures the rate of change of pixel intensity. It is useful for detecting edges because edges are locations in the image where the intensity changes rapidly, so it detects edges by identifying significant changes in intensity. The first derivative can be approximated using gradient operators such as the Sobel, Prewitt, or Scharr operators. The second derivative measures the rate of change of the first derivative. It is useful for detecting edges because zero-crossings (points where the second derivative changes sign) often correspond to edges, so it detects edges by identifying zero-crossings in the rate of change of intensity. The second derivative can be approximated using the Laplacian operator.
Note: An image representing the second derivative will be shown in the next slide.

Edge Detection Concepts: A Simple Analogy
The first derivative is the change in pixel intensity, while the second derivative is the rate of change of that change (a short numerical sketch at the end of this section illustrates the idea).

How is Detection Carried Out?
Edges are found by detecting sharp changes in image brightness, which are likely to correspond to:
1. discontinuities in depth,
2. discontinuities in surface orientation,
3. changes in material properties, and
4. variations in scene illumination.
In the ideal case, applying an edge detector to an image yields a set of connected curves that indicate the boundaries of objects. However, this is not always the case.
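To make the first- and second-derivative idea concrete, the following is just an illustrative NumPy sketch on a made-up 1-D intensity profile (the values are invented for this example, not taken from the slides):

import numpy as np

# A 1-D intensity profile containing a ramp edge: dark (20) rising to bright (120).
profile = np.array([20, 20, 20, 45, 70, 95, 120, 120, 120], dtype=np.float32)

first_derivative = np.diff(profile)             # change in intensity
second_derivative = np.diff(first_derivative)   # rate of change of that change

print(first_derivative)
# [ 0.  0. 25. 25. 25. 25.  0.  0.]  -> large values across the ramp, i.e. at the edge
print(second_derivative)
# [  0.  25.   0.   0.   0. -25.   0.]  -> positive at the start of the ramp, negative
# at the end; for a sharp step edge these merge into a zero-crossing at the edge

The peak of the first derivative marks the edge directly, while the sign change in the second derivative is what Laplacian-based methods look for.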
Simple Analogy of an Edge in an Image
Step discontinuities occur when the image intensity quickly shifts from one value on one side of the discontinuity to a different value on the opposite side. Line discontinuities occur when the image intensity quickly changes but returns to its initial value after a short distance.
[Figure: One-dimensional edge profiles of the step, ramp, line, and roof models.]

Why Is It a Non-Trivial Task?
Consider a 1-D pixel representation of an image segment: is a given change in intensity an edge? What threshold should be used? Choosing such a threshold is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.

Methods of Edge Detection
Common approaches are:
1. Prewitt edge detection
2. Sobel edge detection
3. Laplacian edge detection
4. Canny edge detection

Prewitt Edge Detection
Prewitt edge detection is a technique used for detecting edges in digital images. It works by computing the gradient magnitude of the image intensity using convolution with Prewitt kernels. The gradients are then used to identify significant changes in intensity, which typically correspond to edges. Prewitt edge detection uses two kernels, one for detecting edges in the horizontal direction and the other for the vertical direction. These kernels are applied to the image using convolution.
Horizontal Prewitt kernel (Gx):
[-1 -1 -1]
[ 0  0  0]
[ 1  1  1]
Vertical Prewitt kernel (Gy):
[-1  0  1]
[-1  0  1]
[-1  0  1]

Prewitt Edge Detection: Mask Properties
Both first- and second-derivative masks follow three properties:
1. More weight means a stronger edge response.
2. Opposite signs (+ and -) should be present in the mask.
3. The sum of the mask values must equal zero.
The Prewitt operator provides two masks, one for detecting edges in the horizontal direction and another for detecting edges in the vertical direction.

[Figure: Prewitt edge detection example - original image, vertical mask output, and horizontal mask output.]

Code
The following is just an example of how you could implement Prewitt edge detection. First, create a Prewitt function and convert the input image to grayscale:

def prewitt_edge_detection(image):
    # Convert the image to grayscale
    gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Can anyone tell me why? Hint: channels; can you do filtering over 3 channels at once?

Code
Create the Prewitt kernels (these are the pre-defined kernels for Prewitt):

# horizontal Prewitt kernel
kernel_x = np.array([[-1, -1, -1],
                     [0, 0, 0],
                     [1, 1, 1]])
# vertical Prewitt kernel
kernel_y = np.array([[-1, 0, 1],
                     [-1, 0, 1],
                     [-1, 0, 1]])

Code
Apply both kernels to the image using the cv2.filter2D function:

horizontal_edges = cv2.filter2D(gray_image, -1, kernel_x)
vertical_edges = cv2.filter2D(gray_image, -1, kernel_y)

Syntax: filter2D(src, dst, ddepth, kernel)
Note: Essentially, it convolves the kernel over the image.
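To make the note above concrete, the following is just an illustrative pure-NumPy sketch of the sliding-window operation that cv2.filter2D performs. (Strictly speaking, cv2.filter2D computes a correlation, i.e. it does not flip the kernel; for the Prewitt kernels this only changes the sign of the response.) The function name correlate2d and the edge-replicate padding are choices made for this sketch, not part of the slides:

import numpy as np

def correlate2d(image, kernel):
    # Slide the kernel over every pixel and take the weighted sum of the
    # neighborhood, padding the border by replicating the edge pixels.
    h, w = image.shape
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    padded = np.pad(image, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(h):
        for x in range(w):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = np.sum(region * kernel)
    return out

Applied to gray_image with kernel_x, this produces (up to border handling and output depth) the same response as cv2.filter2D(gray_image, cv2.CV_32F, kernel_x).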
More information on cv2.filter2D can be found here: https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html

Code
Ensure both arrays have the same data type and compute the gradient magnitude:

horizontal_edges = np.float32(horizontal_edges)
vertical_edges = np.float32(vertical_edges)
gradient_magnitude = cv2.magnitude(horizontal_edges, vertical_edges)

If you wish to highlight the edges, you could apply the cv2.threshold function:

threshold = 50
_, edges = cv2.threshold(gradient_magnitude, threshold, 255, cv2.THRESH_BINARY)

Note: with cv2.THRESH_BINARY, if the pixel intensity is greater than the set threshold the value is set to 255, else it is set to 0 (black).
Lastly, return the edge information:

return edges

Sobel Edge Detection
Sobel edge detection is a popular technique used in image processing and computer vision for detecting edges in an image. It is a gradient-based method that uses convolution operations with specific kernels to calculate the gradient magnitude and direction at each pixel in the image. The Sobel operator uses two 3x3 convolution kernels (filters), one for detecting changes in the x-direction (the horizontal gradient) and one for detecting changes in the y-direction (the vertical gradient). These kernels are used to compute the gradient of the image intensity at each point, which helps in detecting the edges.

Sobel Kernels
Horizontal kernel (Gx): this kernel emphasizes the gradient in the x-direction.
[-1  0  1]
[-2  0  2]
[-1  0  1]
The Gx kernel emphasizes changes in intensity in the horizontal direction. The positive values (+1 and +2) on the right side highlight bright areas, while the negative values (-1 and -2) on the left side highlight dark areas, effectively detecting vertical edges.

Vertical kernel (Gy): this kernel emphasizes the gradient in the y-direction.
[-1 -2 -1]
[ 0  0  0]
[ 1  2  1]
The Gy kernel emphasizes changes in intensity in the vertical direction. Similarly, the positive values (+1 and +2) at the bottom highlight bright areas, while the negative values (-1 and -2) at the top highlight dark areas, effectively detecting horizontal edges.

Step-by-Step Sobel Edge Detection
Let's walk through an example of Sobel edge detection using Python and the OpenCV library (a sketch of these steps follows the list):
1. Load and display the image: first, load a sample image and display it to understand what we are working with.
2. Convert to grayscale: convert the image to grayscale, as the Sobel operator works on single-channel images.
3. Apply Gaussian smoothing (optional): apply a Gaussian blur to reduce noise and make edge detection more robust.
4. Apply the Sobel operator: use the Sobel operator to calculate the gradients in the x and y directions.
5. Calculate the gradient magnitude: compute the gradient magnitude from the gradients in the x and y directions. A threshold is applied to the gradient magnitude image to classify pixels as edges or non-edges; pixels with gradient magnitude above the threshold are considered edges.
6. Normalization: the gradient magnitude and individual gradients are normalized to the range 0-255 for better visualization.
7. Display the resulting edge image: normalize and display the edge-detected image.
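The following is just an illustrative sketch of steps 1-7; the file name "input.jpg", the 5x5 blur kernel, and the threshold of 50 are assumptions made for this example, not values prescribed by the slides:

import cv2
import numpy as np

# Step 1: load a sample image ("input.jpg" is a placeholder path).
image = cv2.imread("input.jpg")

# Step 2: convert to grayscale (the Sobel operator works on one channel).
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Step 3 (optional): Gaussian smoothing to reduce noise.
blurred_image = cv2.GaussianBlur(gray_image, (5, 5), 0)

# Step 4: Sobel gradients in the x and y directions.
Gx = cv2.Sobel(blurred_image, cv2.CV_64F, 1, 0, ksize=3)
Gy = cv2.Sobel(blurred_image, cv2.CV_64F, 0, 1, ksize=3)

# Step 5: gradient magnitude, then an illustrative threshold of 50.
gradient_magnitude = cv2.magnitude(Gx, Gy).astype(np.float32)
_, edge_mask = cv2.threshold(gradient_magnitude, 50, 255, cv2.THRESH_BINARY)

# Step 6: normalize the magnitude to the range 0-255 for visualization.
magnitude_8u = cv2.normalize(gradient_magnitude, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)

# Step 7: display the results.
cv2.imshow("Gradient magnitude", magnitude_8u)
cv2.imshow("Edges", edge_mask.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()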
Code
Step 4: Sobel operators. If you want to detect both kinds of transition (dark-to-bright and bright-to-dark), the better option is to keep the output datatype in a higher form, like cv2.CV_16S or cv2.CV_64F, take its absolute value, and then convert back to cv2.CV_8U.

Gx = cv2.Sobel(blurred_image, cv2.CV_64F, 1, 0, ksize=3)
Gy = cv2.Sobel(blurred_image, cv2.CV_64F, 0, 1, ksize=3)

Syntax: Sobel(src, dst, ddepth, dx, dy)
◦ src − an object of the class Mat representing the source (input) image.
◦ dst − an object of the class Mat representing the destination (output) image.
◦ ddepth − an integer representing the depth of the output image (-1 means the same depth as the source).
◦ dx − an integer representing the order of the x-derivative (0 or 1).
◦ dy − an integer representing the order of the y-derivative (0 or 1).
More information can be found here: https://docs.opencv.org/3.4/d2/d2c/tutorial_sobel_derivatives.html

[Figure: Example of using Sobel edge detection.]

Laplacian Edge Detection
Laplacian edge detection is a technique in image processing used to highlight areas of rapid intensity change, which are often associated with edges in an image. Unlike gradient-based methods such as Sobel and Canny, which use directional gradients, Laplacian edge detection relies on the second derivative of the image intensity. The key concepts are as follows. The Laplacian operator detects edges by calculating the second derivative of the image intensity. Mathematically, the second derivative of an image f(x, y) can be represented as:

∇²f = ∂²f/∂x² + ∂²f/∂y²

This can be implemented using convolution with a Laplacian kernel. Common 3x3 kernels for the Laplacian operator include:

[0  1  0]        [1  1  1]
[1 -4  1]  and   [1 -8  1]
[0  1  0]        [1  1  1]

Laplacian Edge Filter
Unlike the Sobel edge detector, the Laplacian edge detector uses only one kernel. It calculates second-order derivatives in a single pass.

Laplacian Edge Detection with OpenCV
cv2.Laplacian() is a function provided by the OpenCV library for performing Laplacian edge detection on images. This function applies the Laplacian operator to the input image to compute the second derivative of the image intensity. The steps for edge detection using the Laplacian are:
1. Convert the image to grayscale: edge detection usually starts with a grayscale image to simplify computations.
2. Apply Gaussian blur (optional): smoothing the image with a Gaussian blur can reduce noise and prevent false edge detection.
3. Apply the Laplacian operator: convolve the image with a Laplacian kernel to calculate the second derivative.

Steps to Implement Laplacian Edge Detection
1. Load the image.
2. Remove the noise by applying a Gaussian blur.
3. Convert the image to grayscale.
4. Apply the Laplacian filter.
5. See the output.

Code
Syntax: Laplacian(src, dst, ddepth)
This method accepts the following parameters:
1. src − a Mat object representing the source (input image) for this operation.
2. dst − a Mat object representing the destination (output image) for this operation.
3. ddepth − an integer representing the depth of the destination image.
Example:

laplacian = cv2.Laplacian(blurred_image, cv2.CV_64F)

[Figure: Example output of the Laplacian edge filter.]
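Putting the Laplacian steps above together, the following is just an illustrative sketch; the file name "input.jpg" and the 3x3 blur kernel are assumptions made for this example:

import cv2

# Step 1: load the image ("input.jpg" is a placeholder path).
image = cv2.imread("input.jpg")

# Step 2: remove noise by applying a Gaussian blur.
blurred = cv2.GaussianBlur(image, (3, 3), 0)

# Step 3: convert the image to grayscale.
gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)

# Step 4: apply the Laplacian filter with a higher output depth (cv2.CV_64F)
# so that negative second-derivative values are not clipped, then take the
# absolute value and convert back to 8-bit for display.
laplacian = cv2.Laplacian(gray, cv2.CV_64F)
laplacian_8u = cv2.convertScaleAbs(laplacian)

# Step 5: see the output.
cv2.imshow("Laplacian edges", laplacian_8u)
cv2.waitKey(0)
cv2.destroyAllWindows()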
Canny Edge Detection
Canny edge detection is a multistage process that helps to identify the edges in an image by reducing noise and preserving important edge features.

Canny Edge Detection: Operation Steps
1. Grayscale conversion
2. Noise reduction
3. Gradient calculation
4. Non-maximum suppression
5. Double thresholding
6. Edge tracking by hysteresis
Details for steps 2 to 6 will be provided in the following slides.

Canny Edge Detection (Step 2): Noise Reduction Using Gaussian Blurring
The first step in the Canny edge detection algorithm is to smooth the image using a Gaussian filter. This helps in reducing noise and unwanted details in the image. The image is convolved with a Gaussian kernel. The Gaussian kernel (or Gaussian function) is defined as:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

This step helps to remove high-frequency noise, which can cause spurious edge detection.

Code
You could use the function cv2.GaussianBlur to reduce noise.
Syntax: GaussianBlur(src, dst, ksize, sigmaX)
▪ src − a Mat object representing the source (input image) for this operation.
▪ dst − a Mat object representing the destination (output image) for this operation.
▪ ksize − a Size object representing the size of the kernel.
▪ sigmaX − a double representing the Gaussian kernel standard deviation in the X direction.
Example:

blurred_image = cv2.GaussianBlur(gray_image, (5, 5), 1.4)

Canny Edge Detection (Step 3): Gradient Calculation
After noise reduction, the Sobel operator is used to calculate the gradient intensity and direction of the image. This involves calculating the intensity gradients in the x and y directions (Gx and Gy). The gradient magnitude and direction are then computed from these gradients: magnitude M = √(Gx² + Gy²) and direction θ = arctan(Gy / Gx).

Canny Edge Detection (Step 4): Non-Maximum Suppression
To thin out the edges and get rid of spurious responses to edge detection, non-maximum suppression is applied. This step retains only the local maxima in the gradient direction. The idea is to traverse the gradient image and suppress any pixel value that is not considered to be an edge, i.e., any pixel that is not a local maximum along the gradient direction. In the accompanying figure, point A is located on the edge in the vertical direction. The gradient direction is perpendicular to the edge. Points B and C lie along the gradient direction. Therefore, point A is compared with points B and C to determine whether it represents a local maximum. If it does, point A proceeds to the next stage; otherwise, it is suppressed and set to zero.

Canny Edge Detection (Step 5): Double Thresholding
After non-maximum suppression, the edge pixels are marked using double thresholding. This step classifies the edges into strong, weak, and non-edges based on two thresholds: high and low. Strong edges are those pixels with gradient values above the high threshold, while weak edges are those with gradient values between the low and high thresholds. Given the gradient magnitude M and two thresholds T_high and T_low, the classification can be mathematically expressed as:
Strong edges: M ≥ T_high
Weak edges: T_low ≤ M < T_high
Non-edges: M < T_low

Canny Edge Detection (Step 6): Edge Tracking by Hysteresis
The final step is edge tracking by hysteresis, which involves traversing the image to determine which weak edges are connected to strong edges. Only the weak edges connected to strong edges are retained, as they are considered true edges. This step ensures that noise and small variations are ignored, resulting in cleaner edge detection.
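The following is just an illustrative sketch of how steps 5 and 6 could be realized with NumPy and OpenCV primitives; it is not how cv2.Canny implements them internally, and the function name double_threshold_hysteresis and its inputs (a gradient-magnitude array plus the two thresholds) are assumptions made for this example:

import numpy as np
import cv2

def double_threshold_hysteresis(magnitude, t_low, t_high):
    # Step 5: classify pixels by gradient magnitude.
    strong = (magnitude >= t_high).astype(np.uint8)
    weak = ((magnitude >= t_low) & (magnitude < t_high)).astype(np.uint8)

    # Step 6: keep weak pixels only if they are connected to strong ones.
    # Repeatedly grow the strong set into neighboring strong/weak pixels
    # (8-connectivity via a 3x3 dilation) until nothing new is added.
    kernel = np.ones((3, 3), np.uint8)
    edges = strong.copy()
    while True:
        grown = cv2.dilate(edges, kernel) & (strong | weak)
        if np.array_equal(grown, edges):
            break
        edges = grown
    return edges * 255  # 8-bit edge map: 255 for edges, 0 otherwise

In practice you would simply call cv2.Canny, which performs steps 3 to 6 for you, as shown in the next slide.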
Code
Steps 3-6 are all built into cv2.Canny.
Syntax: cv2.Canny(image, T_lower, T_upper, aperture_size, L2Gradient)
Where:
▪ image: input image to which the Canny filter will be applied.
▪ T_lower: lower threshold value in hysteresis thresholding.
▪ T_upper: upper threshold value in hysteresis thresholding.
▪ aperture_size: aperture size of the Sobel filter.
▪ L2Gradient: boolean parameter used for more precision in calculating the edge gradient.
Example:

edges = cv2.Canny(blurred_image, 100, 200)

[Figure: Example of Canny edge detection.]

Advantages of Canny Edge Detection
The Canny edge detection algorithm offers several advantages over other edge detection techniques:
1. Accurate edge localization: Canny edge detection provides precise localization of edges. The non-maximum suppression step ensures that only the most significant edges are retained.
2. Low error rate: by using double thresholding and edge tracking by hysteresis, the Canny algorithm reduces the likelihood of false edges and thus has a low error rate.
3. Single response to edges: each edge in the image is represented by only a single response in the output, avoiding duplicate edge detections.
4. Robust to noise: the Gaussian smoothing in the initial steps makes the Canny algorithm robust to noise, so it is suitable for real-world images affected by various levels of noise.

The end