Lecture 1 and 2
Summary
This document provides a lecture on image processing, covering topics including the human visual system, components of an image processing system, and applications in various fields like medicine, agriculture, and entertainment. It includes some drill problems related to image processing concepts.
Human Visual System: consists of two parts: the eye and the brain. The human eye acts as a receptor of images by capturing light and converting it into signals; it is analogous to a camera. These signals are then transmitted to the brain for further analysis. The eyes and the brain work in combination to form a picture.

Image acquisition: aims to obtain a digital image of the object.

(Figure: the retinal image of the sun formed through the lens of the eye.)

The basic lens equation is

1/u + 1/v = 1/f

where
u → distance between the object and the lens
v → distance between the lens and the retinal image (note: the distance between the lens and the retina varies from 14 to 17 mm)
f → focal length

f = uM / (M + 1)

where M is the magnification factor, defined as the ratio of the size of the image to the size of the object. From similar triangles,

O/u = I/v

where O is the object size and I is the retinal image size.

Drill Problem: An image is 2400 pixels wide and 2400 pixels high. The image was scanned at 300 dpi. What is the physical size of the image?

Physical size of the image = (no. of pixels in height ÷ resolution) × (no. of pixels in width ÷ resolution) = 2400/300 × 2400/300 = 8 in × 8 in.

Components of an Image Processing System: a set of devices for acquiring, storing, manipulating and transmitting digital images. The main components of an image processing system are (1) a sensing device, (2) image processing elements, (3) a storage device and (4) a display device.

Sensing devices (or image sensors): used to capture the image. The sensing device senses the energy radiated by the object and converts it into digital form. For example, a digital camera senses light intensity and converts it into digital image form. Video cameras and scanners likewise use sensors to capture images.

Image processing elements: used to perform various operations on a digital image; they require a combination of hardware and software.

Storage: a very important part of an image processing system, because image and video files are very large. For instance, an 8-bit image of 1024 × 1024 pixels requires 1 megabyte of storage space. Mass storage devices are therefore required in image processing systems.

Display devices: required to display the images. These can be a computer monitor, mobile screen or projector, or hardcopy devices such as printers. A communication channel is also essential for sending and receiving images.

Applications of Digital Image Processing: digital image processing techniques are now used in a number of applications; some common applications are given below.

In medicine: several medical tools use image processing for purposes such as image enhancement, image compression and object recognition. X-radiation (X-rays), computed tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), nuclear magnetic resonance (NMR) spectroscopy and ultrasonography are some popular pieces of medical equipment based on image processing.

In agriculture: image processing plays a vital role in agriculture. Paramount tasks such as weed detection, food grading, harvest control and fruit picking are done automatically with the help of image processing.

In weather forecasting: image processing also plays a crucial role in weather forecasting, such as the prediction of rainfall, hailstorms and flooding. Meteorological radars are widely used to detect rain clouds and, based on this information, systems predict the immediate rainfall intensity.
In photography and film: retouched and spliced photos are extensively used in newspapers and magazines to enhance picture quality. In movies, many complex scenes are created with image and video editing tools based on image and video processing operations. Image processing-based methods are even used to predict the success of upcoming movies.

In entertainment and social media: face detection and recognition are widely used on social networking sites: as soon as a user uploads a photograph, the system automatically identifies faces and suggests tagging the person by name.

In security: biometric verification systems provide a high level of authenticity and confidentiality. Biometric verification techniques recognize humans based on their behaviour or characteristics. Video surveillance systems are employed to analyze people's movements and activities and to raise alerts for particularly undesirable behaviour. Several banks and other organizations use such image processing-based video surveillance systems to detect undesired activities.

In banking and finance: the use of image processing-based techniques is rapidly increasing in financial services and banking. 'Remote deposit capture' is a banking facility that allows customers to deposit checks electronically using mobile devices or scanners; the data extracted from the check image is used in place of the physical check. Face detection is also used in the bank customer authentication process, and some banks use facial biometrics to protect sensitive information. Signature verification and recognition also play a significant role in authenticating customers' signatures. However, a robust system for verifying handwritten signatures still needs development; this is challenging because handwritten signatures are imprecise in nature: their corners are not always sharp, lines are not perfectly straight, and curves are not necessarily smooth.

In marketing and advertisement: some companies use image sharing through social media to track the influence of their latest products and advertisements. Tourist departments use images to advertise tourist destinations.

In defence: image processing, along with artificial intelligence, contributes to defence based on two fundamental needs of the military: autonomous operation, and the use of outputs from a diverse range of sophisticated sensors to predict dangers and threats. In the Iran-Iraq war, remote sensing technologies were employed for reconnaissance of enemy territory. Satellite images are analyzed in order to detect, locate and destroy weapons and defence systems used by enemy forces.

In industrial automation: image processing sees unprecedented use in industrial automation. 'Automation of assembly lines' systems detect the position and orientation of components, and bolting robots detect moving bolts. Automated inspection of surface imperfections is also possible thanks to image processing; the main objectives are to determine object quality and detect abnormalities in the products. Many industries also automate the classification of products by shape.

In forensics: tampered documents are widely involved in criminal and civil cases, such as contested wills, financial paperwork and professional business documentation.
Documents like passports and driving licenses are frequently tampered with in order to be used illegally as identification proof. Forensic departments have to establish the authenticity of such suspicious documents. Identifying document forgery becomes increasingly challenging due to the availability of advanced document editing tools, and forgers use the latest technology to perfect their art; computer-scanned elements are copied from one document to another to make the forgery appear genuine. Forgery is not confined to documents; it is also increasingly common in images. Imagery plays a remarkable role in areas such as forensic investigation, criminal investigation, surveillance systems, intelligence systems, sports, legal services, medical imaging, insurance claims and journalism. Almost a decade ago, Iran was accused of doctoring an image from its missile tests; the image, released on the official website of Iran's Revolutionary Guard, appeared to show four missiles heading skyward simultaneously. Almost all major newspapers and news magazines published this photo, including The Los Angeles Times, The Chicago Tribune and BBC News. Later it was revealed that only three missiles were launched successfully and one had failed; the image had been doctored to exaggerate Iran's military capabilities.

In underwater image restoration and enhancement: underwater images are often not clear. They suffer from various problems, such as noise, low contrast, blurring and non-uniform lighting. Image enhancement techniques are used to restore visual clarity.

Digital image processing: the acquisition and processing of visual information by computer. It can be divided into two main application areas: (1) computer vision and (2) human vision, with image analysis being a key component of both. Computer vision applications: imaging applications where the output images are for computer use. Human vision applications: imaging applications where the output images are for human consumption.

Digital image processing concerns techniques for processing an image in order to obtain an enhanced image, or to extract useful information from it on which decisions can be based. Digital image processing techniques are growing at a very fast pace.

Three levels of image processing operations are defined:
Low-level image processing: primitive operations on images (e.g., contrast enhancement, noise reduction), where both the input and the output are images.
Mid-level image processing: operations involving the extraction of attributes (e.g., edges, contours, regions) from images.
High-level image processing: complex image processing operations related to the analysis and interpretation of the contents of a scene for some decision making.

Image processing involves many disciplines, mainly computer science, mathematics, psychology and physics. Other areas, such as artificial intelligence, pattern recognition, machine learning and human vision, are also involved.

Typical Image Processing Operations: image processing involves a number of techniques and algorithms. The most representative operations are listed below.

Binarization: many image processing tasks can be performed more simply and quickly by first converting a color or grayscale image into binary form. Conversion of a color or grayscale image to a binary image having only two levels of gray (black and white) is known as binarization.
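As a concrete illustration, here is a minimal thresholding sketch of binarization in Python (NumPy, the synthetic input array and the threshold value 128 are assumptions for the example; the lecture does not prescribe a particular tool or threshold):

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert a grayscale image (values 0..255) to a binary image.

    Pixels at or above the threshold map to 1 (white), the rest to 0 (black).
    """
    return (gray >= threshold).astype(np.uint8)

# Synthetic 4x4 grayscale image standing in for real data.
gray = np.array([[ 10,  50, 200, 255],
                 [ 90, 130, 140,  30],
                 [  0, 128, 250,  60],
                 [ 70, 190,  20, 220]], dtype=np.uint8)

binary = binarize(gray)
print(binary)  # an M x N logical-style matrix of 0s and 1s
```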
Smoothing: a technique used to blur or smoothen the details of objects in an image.

Sharpening: techniques by which the edges and fine details of objects in an image are enhanced for human viewing.

Noise removal and de-blurring: before processing, the amount of noise in an image is reduced using noise removal filters. De-blurring techniques can also be applied, depending on the type of noise or blur in the image.

Edge extraction: performed to find the various objects in an image before analysing its contents.

Image restoration: the process of taking an image with some known, or estimated, degradation and restoring it to its original appearance.

Image enhancement: improving an image visually.

Image compression: reducing the amount of data needed to represent an image.

Image analysis: the examination of image data to solve an image processing problem.

Image segmentation: used to find higher-level objects in raw image data; the process of dividing an image into various parts is known as segmentation. Segmentation is a pre-processing step for object recognition and classification.

Feature extraction: acquiring higher-level information, such as the shape or color of objects.

Image transforms: may be used in feature extraction to find spatial frequency information.

Pattern classification: used for identifying objects in an image.

Image: a visual representation of an object, a person, or a scene. More formally, an image is a 2-D signal that varies over the spatial coordinates x and y, and can be written mathematically as f(x, y). Equivalently, an image is a 2-D function f(x, y) that is a projection of a 3-D scene onto a 2-D projection plane, where (x, y) is the location of the picture element, or pixel, which holds the intensity value. The amplitude of f is called the intensity or gray level at the point (x, y). When the values of x, y and the intensity are all discrete, the image is said to be a digital image.

A digital image is represented as an M × N numerical array whose values lie in a discrete intensity interval [0, L−1], with L = 2^k. The number b of bits required to store an M × N digitized image is

b = M × N × k

Spatial and Intensity Resolution:
Spatial resolution
▪ a measure of the smallest discernible detail in an image
▪ stated in line pairs per unit distance, or dots (pixels) per unit distance, e.g. dots per inch (dpi)
Intensity resolution
▪ the smallest discernible change in intensity level
▪ stated as a number of bits: 8 bits, 12 bits, 16 bits, etc.

Image Coordinate System: an analog image, like a discrete image, can be placed in the first quadrant of the Cartesian coordinate system. The image f(x, y) is divided into X rows and Y columns, so the coordinate ranges are {x = 0, 1, …, X−1} and {y = 0, 1, 2, …, Y−1}. Pixels are present at the intersections of rows and columns. The word 'pixel' is an abbreviation of 'picture element'. A typical digital image consists of millions of pixels; pixels are considered the building blocks of digital images, as they combine to give a digital image. Pixels represent discrete data, and the value of the function f(x, y) at every point, indexed by a row and a column, is called the grey value or intensity of the image.
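The storage formula b = M × N × k and the f(x, y) indexing convention above can be sketched as follows (the dimensions simply reuse the 1024 × 1024, 8-bit example from earlier in the lecture):

```python
import numpy as np

def storage_bits(rows: int, cols: int, bit_depth: int) -> int:
    """Bits needed to store an M x N image with k bits per pixel: b = M * N * k."""
    return rows * cols * bit_depth

# 8-bit image of 1024 x 1024 pixels -> 8,388,608 bits = 1,048,576 bytes = 1 MB.
b = storage_bits(1024, 1024, 8)
print(b, "bits =", b // 8, "bytes")

# f(x, y): row x, column y holds the quantized intensity at that point.
f = np.zeros((1024, 1024), dtype=np.uint8)  # all-black image, L = 2**8 levels
f[0, 0] = 255                               # set one pixel to white (L - 1)
print(f[0, 0])
```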
Generally, the value of a pixel is the intensity value of the image at that point: the sampled, quantized value of the light captured by the sensor there. It is a pure number and has no units. However, the value of the pixel is not always an intensity value.

The number of rows in a digital image is called the vertical resolution, and the number of columns the horizontal resolution. Together they describe the dimensions of the image, and the image size is often expressed in terms of the rectangular pixel dimensions of the array. Images can be of various sizes; some examples are 256 × 256 and 512 × 512. For a digital camera, the image size is defined as the total number of pixels (specified in megapixels).

Total number of bits necessary to represent the image = number of rows × number of columns × bit depth

Bit depth: the number of bits used to encode the pixel value. A digital image is made up of M × N pixels, each represented by k bits. A pixel represented by k bits can take 2^k different shades of gray in a grayscale image. These pixel values are generally integers, ranging from 0 (a black pixel) to 2^k − 1 (a white pixel). The number of pixels in an image defines its resolution and determines image quality.

Digital images are an integral part of today's digital life. Advantages of digital images: processing is faster and more cost-effective; digital images can be stored effectively and transmitted efficiently from one place to another; when shooting a digital image, one can immediately see whether the image is good or not; copying a digital image is easy, and its quality is not degraded even if it is copied several times; reproduction of an image in digital format is both faster and cheaper; and digital technology offers plenty of scope for versatile image manipulation.

Digital image formation process: mathematically, a digital image is a matrix representation of a 2-D image using a finite number of point elements, usually referred to as pixels (picture elements, or pels). Each pixel is represented by numerical values. For grayscale images, a single value representing the intensity of the pixel (usually in the range [0, 255]) is enough. A grayscale image is a monochrome or 'one color' image that contains only brightness information and no color information; it is a one-band image.

An image that has only two intensities is known as a binary image. Binary image: a simple image type that can take on two values, typically black and white, or '0' and '1'; element values denote the pixel grayscale intensities in the range [0, 1], with 1 = white and 0 = black. A binary image is represented by an M × N logical matrix.

For color images, three values are stored per pixel, representing the amounts of red (R), green (G) and blue (B). The color components of a pixel (m, n) are denoted (m, n, 1) = red, (m, n, 2) = green and (m, n, 3) = blue. Color image: modeled as a three-band monochrome image, where the three bands are typically red, green and blue (RGB). The figure shows a color image and its red, green and blue components; the color image is a combination of these three images. Another figure shows the 8-bit grayscale image corresponding to the color image shown in the first figure.
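A short sketch of this three-band decomposition (the tiny synthetic array stands in for the pictured image, which is not reproduced in this transcript):

```python
import numpy as np

# Synthetic 2 x 2 color image: pure red, pure green, pure blue, white.
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

# The lecture's 1-based notation (m, n, 1) = red, (m, n, 2) = green,
# (m, n, 3) = blue corresponds to NumPy's 0-based bands 0, 1 and 2.
red, green, blue = rgb[:, :, 0], rgb[:, :, 1], rgb[:, :, 2]

print(red)    # each band is itself a grayscale (one-band) image
print(green)
print(blue)
```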
Figure 1.1 also shows the matrix representation of a small part of these images.

Indexed: in MATLAB, indexed (paletted) images are represented with an index matrix of size M × N and a colormap matrix of size K × 3. The colormap matrix holds all colors used in the image, and the index matrix represents the pixels by referring to colors in the colormap.

uint8: in MATLAB, this type uses less memory, and some operations compute faster than with double types.

An image may be a grayscale image or a color image. Most image processing operations are performed on grayscale images. For color image processing applications, a color image can be decomposed into its red, green and blue components, and each component processed independently as a grayscale image. For most operations, an indexed image is first converted to a grayscale or RGB color image.

Drill Problem: What is the storage requirement of a 1024 × 1024, 8-bit grayscale image?
Solution: 1024 × 1024 × 8 = 8,388,608 bits = 1,048,576 bytes = 1048.576 KB (assuming 1000 bytes = 1 KB).

Drill Problem: Consider a color 1024 × 1024 image. If this image is transmitted across a channel of 2 Mbps, what will be the transmission time?
Solution: size of the image = 1024 × 1024; bit depth = 24 bits (8 bits for each of the R, G and B channels). Storage requirement = 1024 × 1024 × 24 = 25,165,824 bits = 3,145,728 bytes. Transmission time = 25,165,824 bits / 2 Mbps ≈ 12.58 s (taking 1 Mbps = 10^6 bits per second).

Primary and Secondary Colors:
✓ Additive primary colors: RGB. These are used for light sources, such as computer monitors and TV sets, which emit such colors. R, G and B add together to give white. The primary colors are added to generate the secondary colors: yellow (red + green), cyan (green + blue) and magenta (red + blue).
✓ Subtractive primary colors: CMY. These are used for pigments in printing devices. Subtracting C, M and Y from white gives black. In subtractive color formation, the color is generated by light reflected from its surroundings; the colored surface does not emit any light of its own. In the subtractive color system, black is produced by a combination of all the colors.

Tristimulus:
❑ Tri-stimulus values: the amounts of red (X), green (Y) and blue (Z) needed to form any particular color are called tristimulus values.
❑ Tri-chromatic coefficients: the normalized values x = X/(X+Y+Z), y = Y/(X+Y+Z) and z = Z/(X+Y+Z), so that x + y + z = 1.

Color Models:
❑ The purpose of a color model (also called a color space or color system) is to facilitate the specification of colors in some standard, generally accepted way.
❑ RGB (red, green, blue): used for monitors and video cameras.
❑ CMY (cyan, magenta, yellow): used for printing devices.
❑ HSI: a model that corresponds closely with the way humans describe and interpret color.

RGB Color Model:
❑ In the RGB color model, the primary colors are represented as Red = (1,0,0), Green = (0,1,0) and Blue = (0,0,1), and the secondary colors as Cyan = (0,1,1), Magenta = (1,0,1) and Yellow = (1,1,0).
✓ RGB is an additive color model: the primary colors red, green and blue are combined to reproduce other colors.
❑ The RGB model can be represented in the form of a color cube. Here we assume that R, G and B are real numbers in the interval [0, 1]; in practice, R, G and B values are usually integers in the interval [0, 255].
✓ In this cube, black is at the origin (0,0,0) and white is at the opposite corner, where R = G = B = 1.
✓ The grayscale values from black to white are represented along the line joining these two points; a grayscale value can thus be written as (x, x, x), from black = (0,0,0) to white = (1,1,1).

The CMY Color Space:
❑ Magenta is written in CMY as (0,1,0); in RGB it is (1,0,1), that is, red plus blue.
❑ Red is written in RGB as (1,0,0); in CMY it is (0,1,1), that is, magenta plus yellow.
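A minimal sketch of the RGB/CMY relationship, assuming normalized values in [0, 1] so that CMY is simply the complement of RGB; the two bullet examples above can be checked with it:

```python
def rgb_to_cmy(r: float, g: float, b: float) -> tuple:
    """CMY is the complement of RGB for normalized values in [0, 1]."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c: float, m: float, y: float) -> tuple:
    """The inverse conversion: RGB = 1 - CMY."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

print(rgb_to_cmy(1, 0, 1))  # magenta: RGB (1,0,1) -> CMY (0,1,0)
print(rgb_to_cmy(1, 0, 0))  # red:     RGB (1,0,0) -> CMY (0,1,1)
```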
HSI Color Model:
❑ The RGB and CMY models are not well suited to human interpretation. Humans describe a color by its hue, saturation and brightness.
✓ Hue: the dominant color.
✓ Saturation: the relative purity of the color (inversely proportional to the amount of white light added).
✓ Intensity: the brightness.

YIQ (luminance, in-phase, quadrature):
❑ Y encodes luminance; I and Q encode color (chromaticity).
❑ For black-and-white TV, only the Y channel is used.
❑ People are more sensitive to differences in luminance, so more bits (more bandwidth) can be used to encode Y and fewer bits to encode I and Q.

Relationship Between RGB and HSI Color Models:
✓ RGB can be described by a 3-D cube, while the HSI model is represented as a color triangle.
✓ All colors lie inside the triangle whose vertices are defined by the three basic colors R, G and B.
✓ If a vector is drawn from the central point of the triangle to a color point P, then the hue (H) is the angle of the vector with respect to the red axis: 0° indicates red, 60° yellow, 120° green, and so on.
✓ The saturation (S) is the degree to which the color is undiluted by white, and is proportional to the distance from the center of the triangle.

Hue and Saturation on Color Planes:
1) A dot in the plane is an arbitrary color.
2) Hue is the angle from the red axis.
3) Saturation is the distance to the point.

Converting RGB to HSI:
❑ Hue is defined as an angle: 0° is red, 120° is green, 240° is blue.
❑ Saturation is defined as the percentage of the distance from the center of the HSI triangle to the pyramid surface; values range from 0 to 1.
❑ Intensity is the distance "up" the axis from black; values range from 0 to 1.

Drill Problem: Let the RGB values of a point be given as (0.2, 0.4, 0.6). Find the HSV equivalent.

Drill Problem: Given an image with different colors, write the RGB colors that would appear on a monochrome display. Assume all colors are at maximum intensity and saturation. Also show each color in black and white, considering them as 0 and 255 respectively.

Given that intensity and saturation are maximum, each of the RGB components is either 0 or 1.

Colors    RGB combination    R G B (intensity/saturation)    R G B (monochrome, 0-255)
White     R + G + B          1 1 1                           255 255 255
Magenta   R + B              1 0 1                           255   0 255
Blue      B                  0 0 1                             0   0 255
Cyan      G + B              0 1 1                             0 255 255
Green     G                  0 1 0                             0 255   0
Yellow    R + G              1 1 0                           255 255   0
Red       R                  1 0 0                           255   0   0
Black     NIL                0 0 0                             0   0   0

❑ 0 represents black and 255 represents white; gray is represented by 128. From the table, the R color series is 255, 255, 0, 0, 0, 255, 255, 0, so the monochrome display of the red component is W, W, B, B, B, W, W, B. Similarly, the monochrome display for green is W, B, B, W, W, W, B, B, and for blue it is W, W, W, W, B, B, B, B.
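The next drill problem computes H, S and I for this same set of colors. A sketch of the RGB-to-HSI conversion is given below; the exact formulas were on slides not reproduced in this transcript, so the usual textbook form is assumed, with H returned as a fraction of a full turn to match the table (e.g. magenta → H = 5/6):

```python
import math

def rgb_to_hsi(r: float, g: float, b: float) -> tuple:
    """Convert normalized RGB in [0, 1] to (H, S, I).

    H is a fraction of a full turn in [0, 1); S and I lie in [0, 1].
    H is undefined for achromatic colors (R = G = B) and returned as None.
    """
    i = (r + g + b) / 3.0
    if r == g == b:
        return (None, 0.0, i)          # gray: hue undefined, saturation 0
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.acos(num / den)       # angle in radians from the red axis
    h = theta if b <= g else 2 * math.pi - theta
    return (h / (2 * math.pi), s, i)

print(rgb_to_hsi(1, 0, 1))  # magenta -> (5/6, 1, 2/3)
print(rgb_to_hsi(0, 1, 0))  # green   -> (1/3, 1, 1/3)
print(rgb_to_hsi(1, 1, 1))  # white   -> (None, 0, 1)
```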
Drill Problem: Given an image with different colors, sketch the HSI components of the image on a monochrome display.

❑ We transform to HSI by computing the values of H, S and I for each color.
❑ For R = 1, G = 1 and B = 1 (white), the angle θ, and hence H, cannot be computed (the formula gives 0/0); S = 0 and I = 1.

Colors    R G B    H      S    I      H    S    I  (0-255 scale)
White     1 1 1    --     0    1      --     0  255
Magenta   1 0 1    5/6    1    2/3    213  255  170
Blue      0 0 1    2/3    1    1/3    170  255   85
Cyan      0 1 1    1/2    1    2/3    128  255  170
Green     0 1 0    1/3    1    1/3     85  255   85
Yellow    1 1 0    1/6    1    2/3     43  255  170
Red       1 0 0    0      1    1/3      0  255   85
Black     0 0 0    --     0    0       --   --    0

Sampling and Quantization:
❑ How does the computer digitize a continuous image?
✓ Example: scan a line such as AB across the continuous image and record the gray intensities along it. (Figure: (a) a continuous image; (b) a scan line showing the intensity variations along line AB, where white represents high intensity (255) and black low intensity (0); a gray-level scale divides the gray levels into 8 discrete levels, and the digital representation of scan line AB is shown as stored on the computer.)
❑ Digitizing the 2-D spatial coordinate values is called sampling; digitizing the amplitude (brightness-level) values is called quantization.
✓ The sampling rate is the number of samples in a fixed amount of time or space, e.g. the spatial resolution (number of pixels) of the digitized image.
✓ The quantization level is the set of equally spaced levels to which a signal is quantized, e.g. the number of grey levels (number of bits) in the digitized image.

Sampling: image sampling discretizes an image in the spatial domain. Spatial resolution (image resolution) refers to the pixel size or the number of pixels.

Drill Problem: Sampling and Quantization. (Figure: a 16 mm scene sampled on a sensor array with pixel spacing dx = dy = 2 mm and, more finely, dx = dy = 1 mm; each pixel stores f(a, b) = i, where (a, b) are the pixel coordinates and i is the intensity.)
❑ The image observed by the acquisition device is projected onto the sensor array (a), where it is sampled and quantized (b).
❑ The color of every pixel of the image in (b) is obtained as the average color of the corresponding region in (a) (sampling), approximated to the closest gray level among those available (quantization).

Bandlimited Images:
❑ An image f(x, y) is said to be band-limited if its Fourier transform F(u, v) is zero outside a bounded region in the frequency plane, i.e.

F(u, v) = 0 for |u| > u0, |v| > v0

❑ u0 and v0 are the bandwidths of the image in the x and y directions.

2-D Comb Function: sampling an image can be modeled as multiplying it by a 2-D comb (impulse train),

comb(x, y) = Σ_m Σ_n δ(x − m Δx, y − n Δy),  m = 0, 1, 2, …, M−1; n = 0, 1, 2, …, N−1

✓ Δx and Δy are the vertical and horizontal sampling intervals.
✓ fs,x = 1/Δx and fs,y = 1/Δy are the vertical and horizontal sampling frequencies.
(Figures: 2-D and 3-D views of a comb function.)

Sampled Image and Foldover Frequencies:
❑ Sampling frequencies: let us and vs be the sampling frequencies. Then we require us > 2u0 and vs > 2v0, or equivalently Δx < 1/(2u0) and Δy < 1/(2v0).
✓ Frequencies above half the sampling frequencies are called foldover frequencies.
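A small sketch of this sampling condition, using the cycles-per-unit convention us = 1/Δx from the slides (the function itself is a generic check, not from the lecture; the 400-dots-per-inch numbers anticipate the drill problem further below):

```python
def nyquist_ok(u0: float, v0: float, dx: float, dy: float) -> bool:
    """Check the 2-D sampling conditions us > 2*u0 and vs > 2*v0,
    i.e. dx < 1/(2*u0) and dy < 1/(2*v0); violating frequencies fold over."""
    us, vs = 1.0 / dx, 1.0 / dy        # sampling frequencies
    return us > 2 * u0 and vs > 2 * v0

# 400 cycles/inch of detail in each direction needs > 800 samples/inch.
print(nyquist_ok(400, 400, dx=1/801, dy=1/801))  # True: just above Nyquist
print(nyquist_ok(400, 400, dx=1/800, dy=1/800))  # False: exactly at the limit
```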
Sampled Spectrum: (Figures: the sampled spectrum consists of replicas of F(u, v) centered at multiples of the sampling frequencies us = 1/Δx and vs = 1/Δy; the examples show adequate sampling, under-sampling along the u direction, under-sampling along the v direction, and under-sampling along both the u and v directions.)

Reconstruction via LPF:
❑ F(u, v) can be recovered by a low-pass filter (LPF) whose passband is a region R, where R is any region whose boundary ∂R is contained within the annular ring between the rectangles R1 and R2 in the figure.
❑ The reconstructed signal is obtained by interpolating the samples with the impulse response of this filter.

Drill Problem: Suppose an image of dimension 4 × 6 inches has details up to a frequency of 400 dots per inch in each direction. How many samples are required to preserve the information in the image?
✓ The bandwidth is 400 in both directions, so samples must be taken at 800 dots per inch in both dimensions. A total of (4 × 800) × (6 × 800) = 15,360,000 samples are needed.

Drill Problem: An image described by the function f(x, y) = 2 cos(3x + 4y) is sampled at Δx = Δy = 0.4π. Find the reconstructed image f̃(x, y).

Using the transform pair F{e^{j(u0 x + v0 y)}} = (2π)² δ(u − u0, v − v0) and

cos(3x + 4y) = (e^{j(3x + 4y)} + e^{−j(3x + 4y)}) / 2,

we get

F(u, v) = (2π)² [δ(u − 3, v − 4) + δ(u + 3, v + 4)],

which is bandlimited, since F(u, v) = 0 for |u| > 3, |v| > 4. Thus u0 = 3 and v0 = 4, and the Nyquist rates are us = 2u0 = 6 and vs = 2v0 = 8. The actual sampling frequencies are us = 2π/Δx = 2π/0.4π = 5 and vs = 2π/Δy = 2π/0.4π = 5, so aliasing is inevitable. The spectrum of the sampled image is

Fs(u, v) = (1/(Δx Δy)) Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} F(u − (2π/Δx) k, v − (2π/Δy) l)
         = 25 Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} [δ(u − 3 − 5k, v − 4 − 5l) + δ(u + 3 − 5k, v + 4 − 5l)].

The LPF has a rectangular passband with cutoff frequencies at half the sampling frequencies, i.e.

H(u, v) = { Δx Δy = (0.4π)²,  |u| ≤ us/2 = 2.5 and |v| ≤ vs/2 = 2.5
          { 0,                elsewhere.

After filtering we obtain

F̃(u, v) = H(u, v) Fs(u, v) = (2π)² [δ(u − 2, v − 1) + δ(u + 2, v + 1)],

since the only replicas falling inside the passband are the k = 1, l = 1 copy of δ(u + 3 − 5k, v + 4 − 5l), at (2, 1), and the k = −1, l = −1 copy of δ(u − 3 − 5k, v − 4 − 5l), at (−2, −1). Therefore

f̃(x, y) = 2 cos(2x + y).

Drill Problem: Given a 2-D image f(x, y) = cos(2πx + 6πy):
a) Determine its Fourier transform F(u, v) and illustrate the spectrum (i.e., the impulses in the transform).
b) Suppose this signal is sampled uniformly with sampling intervals Δx = Δy = Δ = 1/4. Draw the spectrum of the sampled signal.
c) If the sampled signal is interpolated by an ideal low-pass filter h1(x, y) with the frequency response given on the slide, where fs = 1/Δ, draw the spectrum of the reconstructed signal. Give the spatial representation of the reconstructed signal f_r1(x, y).
d) Suppose that a bilinear interpolation filter h2(x, y) = h(x) h(y) is used instead. Give the filter response in the frequency domain, H2(u, v). If the filter is further bandlimited to –fs