1. What is Digital Image Processing? a) It's an application that alters digital videos b) It's a software that allows altering digital pictures c) It's a system that manipulates digital medias d) It's a machine that allows altering digital images
Answer: b. Explanation: Digital Image Processing (DIP) is software that allows you to alter digital images using a computer. It is also used to improve images and extract useful information from them.
2. Which of the following processes helps in image enhancement? a) Digital Image Processing b) Analog Image Processing c) Both a and b d) None of the above
Answer: c. Explanation: Image enhancement can be carried out by both analog and digital processing; in the digital case, it is the process of modifying a stored image with software.
3. Among the following, which functions can be performed by digital image processing? a) Fast image storage and retrieval b) Controlled viewing c) Image reformatting d) All of the above
Answer: d. Explanation: Functions that can be performed by digital image processing include image reconstruction, image reformatting, dynamic range image data acquisition, image processing, fast image storage and retrieval, fast and high-quality image distribution, controlled viewing, and image analysis.
4. Which of the following is an example of Digital Image Processing? a) Computer Graphics b) Pixels c) Camera Mechanism d) All of the mentioned
Answer: d. Explanation: Digital Image Processing is a type of image processing software. Computer graphics, signals, photography, camera mechanisms, pixels, etc. are examples.
5. What are the categories of digital image processing? a) Image Enhancement b) Image Classification and Analysis c) Image Transformation d) All of the mentioned
Answer: d. Explanation: Digital image processing is categorized into (i) preprocessing, (ii) image enhancement, (iii) image transformation, and (iv) image classification and analysis.
6. How does picture formation in the eye vary from image formation in a camera? a) Fixed focal length b) Varying distance between lens and imaging plane c) No difference d) Variable focal length
Answer: d. Explanation: The ciliary body's fibers change the curvature of the lens, changing its focal length.
7. What are the names of the various colour image processing categories? a) Pseudo-color and Multi-color processing b) Half-color and pseudo-color processing c) Full-color and pseudo-color processing d) Half-color and full-color processing
Answer: c. Explanation: Colour image processing is divided into full-color processing and pseudo-color processing.
8. Which characteristics are taken together in chromaticity? a) Hue and Saturation b) Hue and Brightness c) Saturation, Hue, and Brightness d) Saturation and Brightness
Answer: a. Explanation: The combination of hue and saturation is known as chromaticity, and a color's brightness and chromaticity can be used to describe it.
9. Which of the following statements describes the term pixel depth? a) It is the number of units used to represent each pixel in RGB space b) It is the number of mm used to represent each pixel in RGB space c) It is the number of bytes used to represent each pixel in RGB space d) It is the number of bits used to represent each pixel in RGB space
Answer: d. Explanation: The RGB color model represents images as three component images, one for each primary color. When fed into an RGB display, these three images mix on the phosphor screen to produce a composite color image. The pixel depth is the number of bits required to represent each pixel in RGB space.
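A quick illustration of pixel depth from question 9: the sketch below is a minimal NumPy-only example with a made-up 2x2 array, deriving the bits per pixel of an RGB image from its dtype and channel count.

```python
import numpy as np

# Hypothetical 2x2 RGB image with 8 bits per channel.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

bits_per_channel = rgb.dtype.itemsize * 8   # 8 bits for uint8
channels = rgb.shape[-1]                    # 3 channels: R, G, B
pixel_depth = bits_per_channel * channels   # bits needed to represent one pixel in RGB space
print(pixel_depth)                          # -> 24
```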
10. The aliasing effect on an image can be reduced using which of the following methods? a) By reducing the high-frequency components of image by clarifying the image b) By increasing the high-frequency components of image by clarifying the image c) By increasing the high-frequency components of image by blurring the image d) By reducing the high-frequency components of image by blurring the image
Answer: d. Explanation: Aliasing corrupts the sampled image by adding additional frequency components to the sampled function. As a result, the most common method for decreasing aliasing effects on an image is to blur the image prior to sampling to lower its high-frequency components.
11. Which of the following is the first and foremost step in Image Processing? a) Image acquisition b) Segmentation c) Image enhancement d) Image restoration
Answer: a. Explanation: The initial step in image processing is image acquisition. It is worth noting that acquisition might be as simple as being provided a digital image. Preprocessing, such as scaling, is usually done during the image acquisition stage.
12. Which of the following image processing approaches is the fastest, most accurate, and flexible? a) Photographic b) Electronic c) Digital d) Optical
Answer: c. Explanation: Because it is fast, accurate, and dependable, digital image processing is the more versatile and agile technology.
13. Which of the following is the next step in image processing after compression? a) Representation and description b) Morphological processing c) Segmentation d) Wavelets
Answer: b. Explanation: The steps in image processing are: (1) image acquisition, (2) image enhancement, (3) image restoration, (4) color image processing, (5) wavelets and multi-resolution processing, (6) compression, (7) morphological processing, (8) segmentation, (9) representation and description, (10) object recognition.
14. ___________ determines the quality of a digital image. a) The discrete gray levels b) The number of samples c) discrete gray levels & number of samples d) None of the mentioned
Answer: c. Explanation: The number of samples and the discrete grey levels employed in sampling and quantization determine the quality of a digital image.
15. Image processing involves how many steps? a) 7 b) 8 c) 13 d) 10
Answer: d. Explanation: Image processing involves the ten steps listed in question 13, from image acquisition through object recognition.
16. Which of the following is the abbreviation of JPEG? a) Joint Photographic Experts Group b) Joint Photographs Expansion Group c) Joint Photographic Expanded Group d) Joint Photographic Expansion Group
Answer: a. Explanation: Most computer users are aware of picture compression in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
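Question 10's point, that aliasing is reduced by attenuating high frequencies before sampling, can be sketched as follows. This is only an illustration, assuming SciPy's gaussian_filter is available; the test pattern and parameters are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(img, factor=4, sigma=2.0):
    """Blur first (anti-alias), then take every `factor`-th sample."""
    smoothed = gaussian_filter(img.astype(float), sigma=sigma)  # suppress high frequencies
    return smoothed[::factor, ::factor]

# A high-frequency stripe pattern (alternating 0 / 255 columns) that aliases
# badly if subsampled directly.
x = np.arange(256)
stripes = np.tile((x % 2) * 255.0, (256, 1))

naive = stripes[::4, ::4]           # keeps only the dark columns
antialiased = downsample(stripes)   # blurred first, so the average gray survives
print(naive.mean(), antialiased.mean())   # ~0.0 vs ~127.5 (the true average brightness)
```

The naive subsampling misrepresents the image completely, while blurring before sampling preserves its average gray level, which is the behaviour the answer key describes.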
17. Which of the following is the role played by segmentation in image processing? a) Deals with property in which images are subdivided successively into smaller regions b) Deals with partitioning an image into its constituent parts or objects c) Deals with extracting attributes that result in some quantitative information of interest d) Deals with techniques for reducing the storage required saving an image, or the bandwidth required transmitting it
Answer: b. Explanation: Segmentation is a technique for partitioning an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A robust segmentation approach takes the process a long way toward solving image challenges that require individual object identification.
18. The digitization process, in which the digital image comprises M rows and N columns, necessitates choices for M, N, and the number of grey levels per pixel, L. M and N must have which of the following values? a) M have to be positive and N have to be negative integer b) M have to be negative and N have to be positive integer c) M and N have to be negative integer d) M and N have to be positive integer
Answer: d. Explanation: The digitization process, in which the digital image contains M rows and N columns, necessitates choices for M, N, and the maximum grey level number, L. Other than the fact that M and N must be positive integers, there are no constraints on them.
19. Which of the following tools is used in tasks such as zooming, shrinking, rotating, etc.? a) Filters b) Sampling c) Interpolation d) None of the Mentioned
Answer: c. Explanation: The basic tool for zooming, shrinking, rotating, and other operations is interpolation.
20. The effect caused by the use of an insufficient number of intensity levels in smooth areas of a digital image _____________ a) False Contouring b) Interpolation c) Gaussian smooth d) Contouring
Answer: a. Explanation: The ridges resemble the contours of a map, hence the name false contouring.
21. What is the procedure done on a digital image to alter the values of its individual pixels known as? a) Geometric Spatial Transformation b) Single Pixel Operation c) Image Registration d) Neighbourhood Operations
Answer: b. Explanation: It is written as s = T(z), where z is the intensity and T is the transformation function.
22. Points whose locations are known exactly in the input and reference images are used in Geometric Spatial Transformation. a) Known points b) Key-points c) Réseau points d) Tie points
Answer: d. Explanation: Tie points, also known as control points, are spots in the input and reference images whose locations are known precisely.
23. ___________ is a commercial use of Image Subtraction. a) MRI scan b) CT scan c) Mask mode radiography d) None of the Mentioned
Answer: c. Explanation: Mask mode radiography, which is based on image subtraction, is an important medical imaging field.
24. Approaches to image processing that work directly on the pixels of the incoming image work in ____________ a) Spatial domain b) Inverse transformation c) Transform domain d) None of the Mentioned
Answer: a. Explanation: In the spatial domain, operations work directly on the pixels of an input image.
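Question 23 mentions mask mode radiography, which rests on image subtraction. Below is a minimal NumPy sketch of the idea, using entirely synthetic "mask" and "live" arrays invented for illustration.

```python
import numpy as np

# Synthetic "mask" image taken before contrast medium, and a "live" image
# in which a small region has brightened (standing in for contrast flow).
mask = np.full((8, 8), 100, dtype=np.int16)
live = mask.copy()
live[3:5, 3:5] += 60           # the only change between the two exposures

# The difference image isolates what changed; everything static cancels out.
difference = live - mask        # int16 avoids uint8 wrap-around on subtraction
print(np.argwhere(difference > 0))   # coordinates of the enhanced region
```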
25. Which of the following in an image can be removed by using a smoothing filter? a) Sharp transitions of brightness levels b) Sharp transitions of gray levels c) Smooth transitions of gray levels d) Smooth transitions of brightness levels
Answer: b. Explanation: A smoothing filter replaces the value of each pixel in an image with the average of the grey levels in its neighbourhood. As a result, the sharp transitions in grey levels between pixels are reduced. This is done because random noise generally produces strong gray-level transitions.
26. Region of Interest (ROI) operations are generally known as _______ a) Masking b) Dilation c) Shading correction d) None of the Mentioned
Answer: a. Explanation: Masking, commonly known as the ROI operation, is a typical use of image multiplication.
27. Which of the following comes under the application of image blurring? a) Image segmentation b) Object motion c) Object detection d) Gross representation
Answer: d. Explanation: An essential use of spatial averaging is blurring an image in order to obtain a gross representation of objects of interest, so that the intensity of small objects blends with the background and large objects become easier to distinguish.
28. Which of the following filters' responses is based on the ranking of pixels? a) Sharpening filters b) Nonlinear smoothing filters c) Geometric mean filter d) Linear smoothing filters
Answer: b. Explanation: Order-statistic filters are nonlinear smoothing spatial filters whose response is based on ordering or ranking the pixels in the image area covered by the filter, and then replacing the value of the central pixel with the result of the ranking.
29. Which of the following illustrates the three main types of image enhancing functions? a) Linear, logarithmic and power law b) Linear, logarithmic and inverse law c) Linear, exponential and inverse law d) Power law, logarithmic and inverse law
Answer: a. Explanation: The three fundamental types of functions used often for image enhancement by gray-level transformation are linear (negative and identity transformations), logarithmic (log and inverse-log transformations), and power-law (nth power and nth root transformations). The identity function is the simplest case, in which the output and input intensities are the same; it is included only for completeness.
30. Which of the following is the primary objective of sharpening of an image? a) Decrease the brightness of the image b) Increase the brightness of the image c) Highlight fine details in the image d) Blurring the image
Answer: c. Explanation: Sharpening an image helps to highlight fine details in the image or to enhance details that have become blurred owing to factors such as noise addition.
31. Which of the following operations is done on the pixels in sharpening the image, in the spatial domain? a) Differentiation b) Median c) Integration d) Average
Answer: a. Explanation: Sharpening in the spatial domain is accomplished by spatial differentiation, since derivatives respond strongly to intensity discontinuities.
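Questions 25, 28, and 31 above contrast linear smoothing (neighbourhood averaging) with order-statistic filtering (ranking the neighbourhood and taking, for example, the median). The brute-force NumPy sketch below implements both; the helper name filter3x3 is my own and the example values are arbitrary.

```python
import numpy as np

def filter3x3(img, reducer):
    """Apply `reducer` (e.g. np.mean or np.median) over each 3x3 neighbourhood."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = reducer(padded[i:i + 3, j:j + 3])
    return out

noisy = np.full((5, 5), 50.0)
noisy[2, 2] = 255.0                      # a single "salt" pixel

smoothed = filter3x3(noisy, np.mean)     # averaging spreads the spike around
ranked   = filter3x3(noisy, np.median)   # the order-statistic (median) filter removes it
print(smoothed[2, 2], ranked[2, 2])      # ~72.8 vs 50.0
```

The averaging filter blurs the isolated spike into its neighbourhood, while the order-statistic filter suppresses it entirely, which is why median filtering is preferred for impulse noise later in this set.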
Image Processing Multiple Choice Questions with Answers
Q. No 1. Image processing implies digital processing of any ___ data. (a) One-dimensional (b) Two-dimensional (c) Three-dimensional (d) Multi-dimensional Answer: (b) Two-dimensional
Q. No 2. The field of "digital image processing" refers to the processing of a finite number of elements, each of which has a particular location and value. These elements are referred to as ___ (a) Data elements (b) Point elements (c) Picture elements (d) Graphical elements Answer: (c) Picture elements
Q. No 3. Which are not the digital image processing broad-spectrum applications? (a) Image transmission (b) Remote sensing via satellites (c) Storage for business applications (d) None of the above Answer: (d) None of the above
Q. No 4. ___ is the foundation for representing images in various degrees of resolution. (a) Wavelets (b) Compression (c) Morphological (d) Image enhancement Answer: (a) Wavelets
Q. No 5. To acquire a digital image, ___ elements are required. (a) 4 (b) 2 (c) 3 (d) 5 Answer: (b) 2
Q. No 6. The output of a single imaging sensor is ___ (a) Unidirectional Waveform (b) Alternating Waveform (c) Voltage Waveform (d) Square wave Waveform Answer: (c) Voltage Waveform
Q. No 7. ___ is the most widely used term to denote the elements of a digital image. (a) Pixel (b) Number (c) Coordinates (d) None Answer: (a) Pixel
Q. No 8. ___ is used to calculate the spatial distance between pixel locations. (a) Euclidean distance (b) Speed measures (c) Distance measures (d) D4 & D8 Distance Answer: (c) Distance measures
Q. No 9. GIF stands for (a) Graphics Interchange Format (b) Graphical Interface Format (c) Graphical Interface Function (d) Graphics Integrated Format Answer: (a) Graphics Interchange Format
Q. No 10. ___ uses a lossy compression method, meaning that the decompressed image isn't quite the same as the original. (a) GIF (Graphics Interchange Format) (b) JPEG (Joint Photographic Experts Group) (c) TIFF (Tagged Image File Format) (d) PNG (Portable Network Graphics) Answer: (b) JPEG (Joint Photographic Experts Group)
Q. No 11. Identify the below formula: C = A ∩ B (a) The union of A and B (b) The intersection of A and B (c) A and B are disjoint (d) The difference between A and B Answer: (b) The intersection of A and B
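Question 11 reads C = A ∩ B as a set operation. For binary images the intersection is simply a logical AND of the two masks; a minimal NumPy sketch with hypothetical arrays:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 0, 1]], dtype=bool)
B = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=bool)

C = A & B                 # C = A ∩ B: pixels set in both images
print(C.astype(int))
# [[1 0 0]
#  [1 0 0]
#  [0 0 1]]
```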
Q. No 12. In ___ the intensity of the output image decreases as the intensity of the input increases. (a) Image Positives (b) Image Negatives (c) Image Enhancement (d) Image restoration Answer: (b) Image Negatives
Q. No 13. ___ is used to increase the dynamic range of the gray levels in the image being processed. (a) Contrast Range (b) Contrast Image (c) Contrast Stretching (d) Intensity Stretching Answer: (c) Contrast Stretching
Q. No 14. A ___ is a small 2-D array, in which the values of the coefficients determine the nature of the process. (a) Mask (b) Fourier Spectrum (c) Static Histogram (d) Dynamic Histogram Answer: (a) Mask
Q. No 15. ___ process an image with pixel-by-pixel transformation based on the histogram statistics or neighborhood operations. (a) Frequency domain methods (b) Frequency filtering methods (c) Spatial domain methods (d) None Answer: (c) Spatial domain methods
Q. No 16. The values in the filter sub-image are referred to as ___ (a) Pixel (b) Coefficients (c) Coordination (d) Constants Answer: (b) Coefficients
Q. No 17. The process of linear filtering is similar to a frequency domain concept called ___ (a) Mask (b) Filter (c) Convolution (d) Kernel Answer: (c) Convolution
Q. No 18. ___ operate on neighborhoods, and the mechanics of sliding a mask past an image are the same as just outlined. (a) Nonlinear spatial filters (b) Linear spatial filters (c) Smoothing Linear Filters (d) Averaging filters Answer: (a) Nonlinear spatial filters
Q. No 19. ___ filters are used to highlight fine detail in an image. (a) Linear spatial filters (b) Sharpening spatial filters (c) Frequency filtering (d) Low pass filters Answer: (b) Sharpening spatial filters
Q. No 20. The tool that converts a spatial description of an image into one in terms of its frequency components is called the ___. (a) Fourier transforms (b) Inverse Fourier Transform (c) Discrete Fourier transforms (d) None Answer: (a) Fourier transforms
Q. No 21. The tool that converts the Fourier space description back into a real space one is called the ___ (a) Fourier Transform (b) Inverse Fourier Transform (c) Discrete Fourier transforms (d) Integral Fourier Transform Answer: (b) Inverse Fourier Transform
Q. No 22. The most fundamental relationship between the spatial and frequency domains is established by a well-known result called the ___ (a) Convolution theorem (b) Correspondence Theorem (c) Spatial Theorem (d) Correlation theorem Answer: (a) Convolution theorem
Q. No 23. ___ is the most common approach for detecting meaningful discontinuities in the gray level. (a) Line Detection (b) Edge Detection (c) Point Detection (d) Circle Detection Answer: (b) Edge Detection
Q. No 24. Since the Laplacian is a derivative, the sum of the coefficients has to be ___ (a) Zero (b) One (c) Two (d) Three Answer: (a) Zero
Q. No 25. ___ is designed with suitable coefficients and is applied at each point in an image. (a) Masks (b) Segments (c) Regions (d) Pattern templates Answer: (d) Pattern templates
Q. No 26. ___ is one of the most important approaches to image segmentation and can be treated as the class boundary. (a) Region-Based Segmentation (b) Thresholding (c) Region Growing (d) Region Segmentation Answer: (b) Thresholding
Q. No 27. In the first category of segmentation algorithms, the approach is to partition an image based on abrupt changes in intensity, such as ___. (a) Range in an image (b) Points in an image (c) Pictures in an image (d) Edges in an image Answer: (d) Edges in an image
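Question 24 states that the Laplacian mask's coefficients must sum to zero. The sketch below builds the standard 3x3 Laplacian, checks the zero sum, and applies it by direct convolution; it is a plain NumPy illustration with an arbitrary test image, not a reference implementation.

```python
import numpy as np

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)
assert laplacian.sum() == 0     # zero response over regions of constant intensity

def convolve3x3(img, kernel):
    """Direct 2-D convolution with edge padding (small and illustrative)."""
    k = np.flipud(np.fliplr(kernel))        # convolution flips the kernel
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

img = np.zeros((5, 5)); img[:, 2:] = 100.0     # a vertical step edge
response = convolve3x3(img, laplacian)
sharpened = img - response                      # subtract, since the centre coefficient is negative
print(response[2])                              # nonzero only next to the edge
print(sharpened[2])                             # under/overshoot that accentuates the edge
```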
Q. No 28. ___ derivatives of a digital image are based on various approximations of the 2-D gradient. (a) First order (b) Second-order (c) Third order (d) Fourth order Answer: (a) First order
Q. No 29. Gray-scale digital images can be represented as sets whose components are in ___. (a) Z2 (b) Z3 (c) Z4 (d) Z5 Answer: (b) Z3
Q. No 30. ___ smoothes the contour of an object, breaks narrow isthmuses, and eliminates thin protrusions. (a) Closing operation (b) Miss or Hit operation (c) Opening operation (d) Set operation Answer: (c) Opening operation
Q. No 31. A morphological hit-or-miss transform is a basic tool for ___. (a) Result detection (b) Shape detection (c) Data detection (d) Record detection Answer: (b) Shape detection
Q. No 32. ___ are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. (a) Polygonal Approximations (b) Chain Codes (c) Polygonal codes (d) Simple codes Answer: (b) Chain Codes
Q. No 33. A ___ is a 1-D functional representation of a boundary and may be generated in various ways. (a) Chain Codes (b) Boundary Segments (c) Signature (d) Decomposition Answer: (c) Signature
Q. No 34. Topological properties are useful for ___ descriptions of regions in the image plane. (a) local (b) National (c) Global (d) LAN Answer: (c) Global
Q. No 35. ___ is the total amount of energy that flows from the light source, and it is usually measured in watts (W). (a) Luminance (b) Radiance (c) Brightness (d) Saturation Answer: (b) Radiance
Q. No 36. A ___ is a specification of a coordinate system and a subspace within that system where each color is represented by a single point. (a) Color model (b) RGB color model (c) The CMY and CMYK Color Models (d) HSI color model Answer: (a) Color model
Q. No 37. ___ gives a measure of the amount of energy an observer perceives from a light source. It is measured in lumens (lm). (a) Brightness (b) Intensity (c) Density (d) Luminance Answer: (d) Luminance
Q. No 38. The ___ is responsible for reducing or eliminating any coding, inter-pixel, or psycho-visual redundancies in the input image. (a) Encoder (b) Destination (c) Source (d) Decoder Answer: (c) Source
Q. No 39. The ___ transforms the input into a format to reduce interpixel redundancies in the output image. (a) Mapper (b) Encoder (c) Decoder (d) Coder Answer: (a) Mapper
Q. No 40. ___ is present when the codes do not take full advantage of the probabilities of the events. (a) Psycho-visual Redundancy (b) Coding Redundancy (c) Inter-pixel redundancy (d) Fidelity Criteria Answer: (b) Coding Redundancy
Q. No 41. The sensor could be a monochrome or color TV camera that produces an entire image of the problem domain every ___. (a) 2/30 seconds (b) 1/30 seconds (c) 1/60 seconds (d) 1/90 seconds Answer: (b) 1/30 seconds
Q. No 42. Which is not a principal category of digital storage for image processing applications? (1) Short-term storage (2) Online storage (3) Archival storage (a) Only 1, 2 (b) Only 2, 3 (c) Only 1, 3 (d) All 1, 2, 3 Answer: (d) All 1, 2, 3
Q. No 43. State whether the following statements are true or false: 1. Digitizing the coordinate values is called quantization. 2. Digitizing the amplitude values is called sampling. (a) 1-T, 2-F (b) 1-F, 2-T (c) 1-T, 2-T (d) 1-F, 2-F Answer: (d) 1-F, 2-F
Q. No 44. The Euclidean distance between p and q is defined as (a) De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2) (b) De(p, q) = |x - s| + |y - t| (c) De(p, q) = max(|x - s|, |y - t|) (d) De(p, q) = |x - s| + |y - t|^2 Answer: (a) De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
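Questions 8 and 44 concern distance measures between pixel locations p = (x, y) and q = (s, t). Below is a short sketch of the three standard definitions (Euclidean, city-block D4, chessboard D8), using arbitrary example points.

```python
import math

def d_euclidean(p, q):
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)   # De(p, q)

def d4(p, q):
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)                  # city-block distance

def d8(p, q):
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))              # chessboard distance

p, q = (2, 3), (5, 7)
print(d_euclidean(p, q), d4(p, q), d8(p, q))        # 5.0, 7, 4
```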
Q. No 45. Identify the given table:
P Q R
0 0 0
0 1 0
1 0 0
1 1 1
(a) AND operation (b) OR operation (c) XOR operation (d) NOT operation Answer: (a) AND operation
Q. No 46. State whether the following statements are true or false as reasons for developing JPEG: 1. It makes image files smaller. 2. It stores 16-bit per pixel color data instead of 8-bit per pixel data. (a) 1-F, 2-T (b) 1-T, 2-F (c) 1-T, 2-T (d) 1-F, 2-F Answer: (b) 1-T, 2-F
Q. No 47. A ___ image has a histogram that will be narrow and will be centered toward the ___ of the grayscale. (a) High contrast, Corner (b) High contrast, middle (c) Low contrast, middle (d) Low contrast, Corner Answer: (c) Low contrast, middle
Q. No 48. The gray levels in an image may be viewed as random quantities in the interval ___. (a) [1, 10] (b) [1, 0] (c) [10, 1] (d) [0, 1] Answer: (d) [0, 1]
Q. No 49. Which of the following are applications where image sharpening can be used? 1. Electronic printing 2. Medical imaging 3. Autonomous guidance in military systems (a) 1, 2 only (b) 2, 3 only (c) 1, 3 only (d) 1, 2, 3 all Answer: (d) 1, 2, 3 all
Q. No 50. Which of the following are types of correlation? 1. Autocorrelation 2. Inter-correlation 3. Frequency-correlation (a) 1, 2 only (b) 2, 3 only (c) 1 only (d) 2 only Answer: (c) 1 only
Questions and Answers
1. Consider the image I (0-16 levels) = [1 2 3; 0 1 8; 1 2 2]. Applying a 3x3 mean filter over the only pixel with intensity value 8, the new intensity value will be A. 6 B. 8 C. 2 D. 16
Correct Answer: C. 2
2. If the pixels of an image are shuffled, then the parameter that may change is A. Histogram B. Mean C. Median D. Mode E. None
Correct Answer: E. None. Explanation: When the pixels of an image are shuffled, the arrangement of the pixels changes but the actual values of the pixels remain the same. The histogram, mean, median, and mode therefore all stay the same, so none of these parameters changes when the pixels are shuffled.
3. The sum of all elements in the mask of the sharpening spatial filtering (Laplacian) must be equal to A. M rows B. n columns C. M * n D. 0
Correct Answer: D. 0. Explanation: In Laplacian (sharpening) spatial filtering, the sum of all elements in the mask must be 0. Because the Laplacian is a derivative operator, a zero-sum mask gives zero response over areas of constant intensity, so only the high-frequency components, the edges and fine details, are emphasized.
4. Edge detection in images is commonly accomplished by performing a spatial ___ of the image field. A. Smoothing Filter B. Box Filter C. Sharpening Filter D. Mean Filter
Correct Answer: C. Sharpening Filter. Explanation: A sharpening filter enhances the high-frequency components of the image, making the edges more pronounced. By increasing the contrast between adjacent pixels, it helps to identify and highlight the edges present in the image, which is why it is commonly used for edge detection.
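Question 1 of this quiz set can be checked directly, assuming the mask is taken to cover the whole 3x3 image and the result is truncated to an integer:

```python
import numpy as np

I = np.array([[1, 2, 3],
              [0, 1, 8],
              [1, 2, 2]], dtype=float)

# Averaging the nine values covered by the 3x3 mask gives 20/9 = 2.22...,
# which truncates to 2, matching the stated answer.
print(I.sum() / 9, int(I.sum() / 9))   # 2.222..., 2
```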
5. To remove "salt-and-pepper" noise without blurring, we use A. Max Filter B. Median Filter C. Min Filter D. Smoothing Filter
Correct Answer: B. Median Filter. Explanation: The median filter removes "salt-and-pepper" noise without blurring the image. It replaces each pixel value with the median value of its neighboring pixels, which is effective at removing isolated noise pixels while preserving the edges and details in the image.
6. Compute the median values of the marked pixels shown in the figure below using a 3x3 mask: [18 22 33 25 32 24; 34 128 24 172 26 23; 22 19 32 31 28 26] A. 25 30 B. 24 26 C. 24 31 D. 25 34
Correct Answer: C. 24 31. Explanation: The marked pixels are 128 and 172. The 3x3 neighbourhood of 128 is {18, 22, 33, 34, 128, 24, 22, 19, 32}; sorted, its middle (fifth) value is 24. The 3x3 neighbourhood of 172 is {33, 25, 32, 24, 172, 26, 32, 31, 28}; sorted, its middle value is 31. Hence the median-filtered values are 24 and 31.
7. How do you bring out more of the skeletal detail from a Nuclear Whole Body Bone Scan? A. Sharpening B. Enhancing C. Transformation D. None of the mentioned
Correct Answer: A. Sharpening. Explanation: Sharpening enhances the clarity and detail of an image. In a nuclear whole-body bone scan, sharpening brings out more of the skeletal detail, making it easier to identify any abnormalities. Applying sharpening filters highlights the bones and their structure, providing a clearer and more detailed view.
8. The type of Histogram Processing in which pixels are modified based on the intensity distribution of the image is called _______________. A. Intensive B. Local C. Global D. Random
Correct Answer: C. Global. Explanation: In global histogram processing, the modification of pixels is based on the intensity distribution of the entire image, so the adjustment is applied uniformly to all pixels regardless of their location. It is commonly used to enhance contrast, brightness, or the overall appearance of an image in a single pass.
9. In the _______ image, we notice that the components of the histogram are concentrated on the low side of the intensity scale. A. Bright B. Dark C. Colorful D. All of the Mentioned
Correct Answer: B. Dark. Explanation: In a dark image, the components of the histogram are concentrated on the low side of the intensity scale; most pixels have low intensity values, giving the image a darker overall appearance.
10. Two different images may have the same histograms. The statement is A. True B. False
Correct Answer: A. True. Explanation: A histogram only records the frequency distribution of pixel intensities; it carries no information about where those intensities occur. Two different images can therefore have the same distribution of pixel intensities and, consequently, the same histogram.
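Question 2 earlier in this quiz and question 10 both rest on the fact that a histogram ignores pixel positions. The short NumPy check below shuffles a synthetic image and compares histograms; the random image and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

shuffled = img.flatten()            # flatten() copies, so img is untouched
rng.shuffle(shuffled)               # same pixel values, new positions
shuffled = shuffled.reshape(img.shape)

hist_original = np.bincount(img.ravel(), minlength=256)
hist_shuffled = np.bincount(shuffled.ravel(), minlength=256)

print(np.array_equal(hist_original, hist_shuffled))   # True: identical histograms
print(np.array_equal(img, shuffled))                  # False: different images
```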