Lec1 Introduction to Vision and Image Processing1.pptx
Digital Image Processing
Introduction to Vision and Visual System
Dr. Muhammad Ahsan Ansari, Associate Professor, Department of CSE, Mehran UET, Jamshoro

Course Learning Outcomes:
CLO 1: Understand the need for digital image processing and the confluence of different fields, as well as the emerging problems in this area. (Taxonomy Level: C2, PLO: 1)
CLO 2: Understand and implement digital image processing techniques and design image filters in the spatial domain. (Taxonomy Level: C3, PLO: 3)

Syllabus:
◦ Basics of an Image
◦ Imaging Geometry
◦ Camera Modeling and Calibration
◦ Filtering and Enhancing Images
◦ Line and Curve Detection
◦ Fourier Transforms on Images

References:
1) Digital Image Processing, by R.C. Gonzalez and R.E. Woods, Pearson, 2003.
2) Digital Image Processing Using MATLAB, by Gonzalez, Woods, and Eddins, Prentice Hall, 2004. http://www.imageprocessingplace.com/
3) Fundamentals of Computer Vision, by Dr. Mubarak Shah. http://www.cs.ucf.edu/~vision/

Vision:
◦ Vision is the process of discovering what is present in the world and where it is.
◦ Perception is the process of acquiring, interpreting, selecting, and organizing sensory information.

Visual System:
The visual system allows us to assimilate information from the environment. The act of seeing starts when the lens of the eye focuses an image of the outside world onto a light-sensitive membrane at the back of the eye, called the retina. The retina is actually a part of the brain that is isolated to serve as a transducer for the conversion of patterns of light into neuronal signals.

Color Vision:
Color vision is the capacity of a human or machine to distinguish objects based on the frequencies of the light they reflect or emit. The nervous system derives color by comparing the responses to light from the several types of cone photoreceptors in the eye. For humans, the visible spectrum ranges approximately from 380 to 750 nm.

A 'red' apple does not emit red light. Rather, it absorbs all the frequencies of visible light shining on it except for a group of frequencies perceived as red, which are reflected. An apple is perceived to be red only because the human eye can distinguish between different wavelengths.

Three things are needed to see an image:
◦ a light source,
◦ a detector (e.g. the eye),
◦ a sample to view.

Computer Vision:
Computer vision is the science and technology of machines that see. Computer vision is the study of the analysis of pictures and videos in order to achieve results similar to those achieved by humans.

Sub-domains of computer vision include:
◦ Image acquisition
◦ Image restoration
◦ Object recognition
◦ Scene reconstruction
◦ Event detection and tracking
◦ Motion and 3-dimensional aspects

Related Disciplines:
◦ Image processing
◦ Computer graphics
◦ Pattern recognition
◦ Artificial intelligence
◦ Applied mathematics
◦ Learning

Steps involved:
1) Image acquisition:
A digital image is produced by one or several image sensors. These sensors may include:
◦ Light-sensitive cameras
◦ Range sensors
◦ Tomography devices
◦ Radar and ultrasonic cameras, etc.
Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands, as the sketch below illustrates.
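A minimal sketch of inspecting such pixel data, assuming Python with NumPy and Pillow available; "sample.png" is a placeholder file name, not an image from the lecture:

    from PIL import Image
    import numpy as np

    # Load an image file and convert it to an 8-bit grayscale array.
    img = np.asarray(Image.open("sample.png").convert("L"))

    print(img.shape)             # (rows, columns): the 2D grid of pixels
    print(img.dtype)             # uint8: each intensity is stored as 0..255
    print(img.min(), img.max())  # darkest and brightest intensities present

    # A color image is the same idea, with one band per spectral channel.
    rgb = np.asarray(Image.open("sample.png").convert("RGB"))
    print(rgb.shape)             # (rows, columns, 3) for the R, G, B bands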
2) Pre-processing:
Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data to ensure that it satisfies certain assumptions implied by the method. Examples include (see the pre-processing sketch after the applications slide):
◦ Re-sampling, to ensure that the image coordinate system is correct.
◦ Noise reduction, to ensure that sensor noise does not introduce false information.
◦ Contrast enhancement, to ensure that relevant information can be detected.

3) Feature extraction:
Image features at various levels of complexity are extracted from the image data. Typical examples of such features are lines and edges, and localized interest points such as corners and blobs. More complex features may be related to texture, shape, or motion. (An edge-detection sketch follows the applications slide.)

4) Detection/Segmentation:
At some point in the processing, a decision is made about which image points or regions of the image are relevant for further processing. Examples are (see the thresholding sketch after the applications slide):
◦ Selection of a specific set of interest points.
◦ Segmentation of one or multiple image regions that contain a specific object of interest.

Application Areas of Computer Vision:
◦ Law enforcement.
◦ Nuclear medicine and defense.
◦ Automatic character recognition.
◦ Industrial applications (machine vision).
◦ Satellite imagery for weather prediction.
◦ Solving problems with machine perception.
◦ Enhancing contrast or coding intensity levels into color for easier interpretation.
◦ Interpretation of X-rays and other images used in industry, medicine, and the biological sciences.
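Sketch for step 2 (Pre-processing). This is a minimal NumPy-only illustration of the noise-reduction and contrast-enhancement operations listed above; it assumes an 8-bit grayscale image already loaded as a 2D array (as in the acquisition sketch), and the 3x3 mean filter and linear contrast stretch are illustrative choices, not methods prescribed by the course:

    import numpy as np

    def mean_filter_3x3(img):
        # Noise reduction: replace each pixel by the mean of its 3x3 neighbourhood.
        f = np.pad(img.astype(np.float64), 1, mode="edge")
        h, w = img.shape
        out = np.zeros((h, w), dtype=np.float64)
        for dr in (0, 1, 2):
            for dc in (0, 1, 2):
                out += f[dr:dr + h, dc:dc + w]
        return (out / 9.0).astype(np.uint8)

    def contrast_stretch(img):
        # Contrast enhancement: linearly map the darkest pixel to 0, the brightest to 255.
        f = img.astype(np.float64)
        lo, hi = f.min(), f.max()
        if hi == lo:                 # flat image, nothing to stretch
            return img.copy()
        return ((f - lo) * 255.0 / (hi - lo)).astype(np.uint8)

    # Typical use: clean = contrast_stretch(mean_filter_3x3(img))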
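Sketch for step 3 (Feature extraction). Edges are the simplest features mentioned in the slides; the following gradient-magnitude edge map uses the standard 3x3 Sobel kernels as a textbook example, not necessarily the detector used later in the course, and again assumes a grayscale NumPy array:

    import numpy as np

    def sobel_edges(img):
        # Gradient-magnitude edge map built from the 3x3 Sobel kernels.
        f = img.astype(np.float64)
        h, w = f.shape
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # horizontal change
        ky = kx.T                                                              # vertical change
        padded = np.pad(f, 1, mode="edge")
        gx = np.zeros((h, w))
        gy = np.zeros((h, w))
        for r in range(3):
            for c in range(3):
                patch = padded[r:r + h, c:c + w]
                gx += kx[r, c] * patch
                gy += ky[r, c] * patch
        return np.hypot(gx, gy)   # edge strength at each pixel

    # Strong edges can then be kept by thresholding the returned magnitude map.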
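Sketch for step 4 (Detection/Segmentation). The simplest segmentation is a global threshold that splits a grayscale image into foreground and background; the iterative (isodata-style) threshold selection below is one common heuristic included purely as an illustration, under the assumption that the object of interest is brighter than the background:

    import numpy as np

    def isodata_threshold(img, eps=0.5):
        # Iteratively place the threshold halfway between the mean foreground
        # and mean background intensities.
        f = img.astype(np.float64)
        t = f.mean()
        while True:
            fg, bg = f[f >= t], f[f < t]
            if fg.size == 0 or bg.size == 0:   # nearly uniform image
                return t
            new_t = 0.5 * (fg.mean() + bg.mean())
            if abs(new_t - t) < eps:
                return new_t
            t = new_t

    def threshold_segment(img, t):
        # Global thresholding: True marks pixels treated as the object of interest.
        return img >= t

    # Typical use: mask = threshold_segment(img, isodata_threshold(img))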