Basic Medical Image Processing and Analysis Lesson 5 (PDF)

Summary

This lesson covers the basics of medical image processing and analysis. It details image formation, visualization, analysis, management, and enhancement. The document includes learning outcomes for the students and descriptions of preprocessing and postprocessing techniques.

Full Transcript


BASIC MEDICAL IMAGE PROCESSING AND ANALYSIS
RENIEN S. MUYCO, RRT, MSRT

LEARNING OUTCOMES: At the end of the discussion, the students will be able to:
- understand the basics of image processing and analysis
- identify how image processing is done
- appreciate the purpose of processing and analysis in providing an accurate diagnosis
- distinguish post-processing from pre-processing techniques
- identify the significance of pre- and post-processing.

The commonly used term "medical image processing" means the provision of digital image processing for medicine. Medical image processing covers five major areas:

1. Image formation includes all the steps from capturing the image to forming a digital image matrix.
2. Image visualization refers to all types of manipulation of this matrix, resulting in an optimized output of the image.
3. Image analysis includes all the steps of processing which are used for quantitative measurements as well as abstract interpretations of medical images. These steps require a priori knowledge of the nature and content of the images, which must be integrated into the algorithms on a high level of abstraction. Thus, the process of image analysis is very specific, and the developed algorithms can rarely be transferred directly into other domains of application.
4. Image management encompasses all the techniques that provide efficient storage, communication, transmission, archiving, and access (retrieval) of image data. A simple grayscale radiograph in its original condition may require several megabytes of storage capacity, so compression techniques are applied. The methods of telemedicine are also a part of image management.
5. Image enhancement: in contrast to image analysis, which is also referred to as high-level image processing, low-level processing or image enhancement denotes manual or automatic techniques that can be realized without a priori knowledge of the specific content of images. This type of algorithm has similar effects regardless of what is shown in an image.

PREPROCESSING
- Preprocessing takes place in the computer, where algorithms determine the image histogram (a minimal histogram sketch follows the terminology list below).
- Post-processing is done by the technologist through various user functions.
- Digital preprocessing methods are vendor-specific.

CAVA: COMPUTER-AIDED VISUALIZATION AND ANALYSIS
Definition:
- The science underlying computerized methods of image processing, analysis, and visualization to facilitate new therapeutic strategies, basic clinical research, education, and training.

CAD: COMPUTER-AIDED DIAGNOSIS
Definition:
- The science underlying computerized methods of image processing and analysis for the diagnosis of diseases via images.

CAVA/CAD OPERATIONS
1. Pre (image) processing – for enhancing information about and defining the object system.
2. Visualization – for viewing and comprehending the object system.
3. Manipulation – for altering the object system (virtual surgery).
4. Analysis – for quantifying information about the object system.

TERMINOLOGIES
- Object – an entity that is imaged and studied. It may be rigid or deformable, static or dynamic, physical or conceptual.
- Object system – a collection of related objects.
- Scanner – any imaging device, real or conceptual.
- Body region – the support region of the imaged object system.
- Voxels (3D) – cuboidal elements into which the body region is digitized by the imaging device.
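As a minimal illustration of the histogram determination mentioned under PREPROCESSING above (not any vendor's actual preprocessing algorithm), the sketch below simply counts how many pixels fall into each intensity bin; the toy array and the bin count are assumed values.

```python
import numpy as np

# A small 8-bit "image" standing in for acquired pixel data (assumed values).
image = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 215],
                  [12, 14, 198, 220],
                  [ 9, 15, 202, 225]], dtype=np.uint8)

# Determine the image histogram: how many pixels fall into each intensity bin.
counts, bin_edges = np.histogram(image, bins=16, range=(0, 256))

for lo, hi, n in zip(bin_edges[:-1], bin_edges[1:], counts):
    if n:
        print(f"{int(lo):3d}-{int(hi):3d}: {n} pixels")
```

Exposure- and histogram-based rescaling in real systems is considerably more involved, but the underlying object is still this per-bin pixel count.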
TERMINOLOGIES
- Scene – multidimensional (2D, 3D, 4D, ...) image of the body region; S = (C, f).
- Scene domain – the rectangular array of voxels on which the scene is defined; C.
- Scene intensity – the values assigned to the voxels; f(c).
- Binary scene – a scene with intensities 0 and 1 only.
- Structure – a geometric representation of an object in the object system, derived from scenes.

TERMINOLOGIES
- Structure system – a collection of structures representing the objects in an object system.
- Rendition – a 2D image depicting a structure system.
- Body coordinate system – a coordinate system associated with the imaged body region.
- Scanner coordinate system – a coordinate system affixed to the scanner.
- Scene coordinate system – a coordinate system affixed to the scene.
- Structure coordinate system – a coordinate system determined for structures.
- Display coordinate system – a coordinate system associated with the display device.

COORDINATE SYSTEMS

PRE-MEDICAL IMAGE PROCESSING
- Image reconstruction
- Background removal
- Noise removal
- Image compression

Image reconstruction
- A mathematical process that generates tomographic images from x-ray projection data acquired at many different angles around the patient.

Background removal
- Removal of the white unexposed borders results in an overall smaller number of pixels.
- This reduces the amount of information to be stored (a minimal border-cropping sketch appears at the end of this section).
- Unexposed borders around the collimation edges allow excess light to enter the eye; the effect is known as veil glare.
- Glare causes over-sensitization of a chemical within the eye called rhodopsin, which results in temporary white-light blindness.

Noise removal
- Radiographic noise is a fluctuation in optical density on radiographic or mammographic images, often the result of a low radiation dose.

Image compression
- Image compression is minimizing the size in bytes of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space.
- Why?
  - growing need for storage
  - efficient data transmission
  - teleradiology applications
  - PACS

POST-MEDICAL IMAGE PROCESSING
- Filtering
- Contrast enhancement
- Registration
- Classification, texturing, and segmentation

POST-MEDICAL IMAGE PROCESSING
- Describes the manipulation of radiographic images to derive additional qualitative or quantitative data.
- Modern imaging devices and protocols, whether in CT, MRI, or ultrasound, generate large volumes of information that enhance not only our diagnostic roles but treatment planning as well.
- Film-based processing is now obsolete (Seeram, 2008).
- Goal: to alter or change an image to enhance diagnostic interpretation.

IMAGE DOMAIN CONCEPT
Two domains:
1. Spatial domain
   - radiography and CT
   - techniques are based on direct manipulation of pixels in an image
2. Spatial frequency domain
   - MRI
   - techniques are based on modifying the Fourier transform of an image

SPATIAL LOCATION DOMAIN
- A matrix of pixels; a matrix is a two-dimensional array of numbers.
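A minimal sketch of the background-removal idea described above: cropping away the white unexposed border so that fewer pixels need to be stored. The `white_level` cut-off and the toy scene are assumptions; real systems detect collimation edges rather than relying on a fixed value.

```python
import numpy as np

def remove_unexposed_border(scene, white_level=240):
    """Crop rows/columns that consist entirely of unexposed (near-white) pixels.

    `white_level` is an assumed cut-off for 'unexposed'; actual products use
    collimation-edge detection instead of a fixed threshold.
    """
    exposed = scene < white_level                  # True where exposure/anatomy exists
    rows = np.where(exposed.any(axis=1))[0]        # rows containing exposed pixels
    cols = np.where(exposed.any(axis=0))[0]        # columns containing exposed pixels
    if rows.size == 0 or cols.size == 0:
        return scene                               # nothing exposed: return unchanged
    return scene[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Toy radiograph: a 6x6 scene whose outer frame is unexposed (255).
scene = np.full((6, 6), 255, dtype=np.uint8)
scene[2:5, 2:5] = 80                               # exposed region
print(remove_unexposed_border(scene).shape)        # -> (3, 3): fewer pixels to store
```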
SPATIAL FREQUENCY DOMAIN
- Frequency is the number of cycles per unit length, that is, the number of times a signal changes per unit length.
- Small structures within an object produce high frequencies, which carry the detail in the image.
- Large structures produce low frequencies, which carry the contrast information in the image (a small FFT-based sketch appears at the end of this section).

CLASSES OF IMAGE POST-PROCESSING
- Image restoration – improves the quality of images that have distortions and degradations.
- Image analysis – allows measurements as well as image segmentation, feature extraction, and classification of objects.
- Image synthesis – creates images from other images or from non-image data.
- Image enhancement – generates an image that is more pleasing to the observer; includes contrast enhancement, edge enhancement, spatial and frequency filtering, and noise reduction.
- Image compression – reduces the size of the image to decrease transmission time and to reduce storage space.

CONCEPT OF WINDOWING
A digital image is made up of a range of numbers:
- window width (WW) – the range of numbers
- window level (WL) – the center of the range

WINDOW AND LEVEL
- Window and level are the most common controls for brightness and contrast.
- Window level controls how light or dark the image is; window width controls the ratio of black to white, or contrast (a minimal window/level mapping sketch appears at the end of this section).
- The user is able to manipulate these quickly with the mouse.
- One direction of mouse movement, vertical or horizontal, controls brightness, and the other direction controls contrast.
- To control density and contrast further, contrast enhancement parameters are used.

TYPES OF IMAGE COMPRESSION
- Lossy compression – some image detail is lost when the image is decompressed; compression ratios around 100:1.
- Lossless compression – no information in the image is lost; compression ratios around 5:1 (a small lossless round-trip sketch follows below).
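To connect the spatial and spatial-frequency domains, the sketch below modifies the Fourier transform of a toy image: it keeps only the low frequencies, which removes the fine stripe pattern (detail) while preserving the large-scale variation (contrast). The image, frequencies, and mask size are all assumptions chosen so the effect is exact and easy to verify.

```python
import numpy as np

# Toy 64x64 image: a low-frequency component (large structure) plus a
# high-frequency stripe pattern (fine detail). All values are assumptions.
n = 64
y, x = np.mgrid[0:n, 0:n]
low = 100.0 + 30.0 * np.cos(2 * np.pi * y / n)        # 1 cycle across the image
high = 20.0 * np.cos(2 * np.pi * 16 * x / n)          # 16 cycles: fine stripes
image = low + high

# Move to the spatial frequency domain and keep only a central block of
# low frequencies, discarding the high frequencies.
spectrum = np.fft.fftshift(np.fft.fft2(image))        # low frequencies at centre
mask = np.zeros((n, n))
mask[n // 2 - 8:n // 2 + 8, n // 2 - 8:n // 2 + 8] = 1.0
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# The stripes (detail) are gone; the large-scale variation (contrast) remains.
print(np.abs(filtered - low).max() < 1e-9)             # -> True
```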
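The window width / window level idea above can be written as a simple linear rescaling with clipping. This is a generic sketch with assumed WW/WL values, not a specific viewer's implementation.

```python
import numpy as np

def apply_window(values, window_width, window_level):
    """Map stored values into display grey levels 0-255 using WW/WL.

    Values below (WL - WW/2) become black, values above (WL + WW/2) become
    white, and the range in between is spread linearly over the grey scale.
    """
    low = window_level - window_width / 2.0
    high = window_level + window_width / 2.0
    scaled = (values - low) / (high - low)       # 0..1 inside the window
    return np.clip(scaled, 0.0, 1.0) * 255.0

# Example: CT-style numbers displayed with an assumed soft-tissue window.
ct_values = np.array([-1000, -100, 40, 80, 400, 1000])
print(apply_window(ct_values, window_width=400, window_level=40).round())
```

Narrowing the window width stretches a smaller range of stored values over the full grey scale (more contrast); shifting the window level moves the displayed range up or down (brighter or darker).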
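To make the lossy/lossless distinction concrete, the sketch below compresses a toy image with Python's built-in zlib (a lossless, general-purpose codec used here only as a stand-in) and shows that decompression restores every pixel exactly. The 100:1 and 5:1 ratios quoted above depend on the image content and on the actual medical codec used, not on this toy example.

```python
import zlib
import numpy as np

# A toy 256x256 8-bit image with large uniform areas (compresses well).
image = np.zeros((256, 256), dtype=np.uint8)
image[64:192, 64:192] = 180

raw = image.tobytes()
packed = zlib.compress(raw, 9)                   # lossless compression

print(f"original  : {len(raw)} bytes")
print(f"compressed: {len(packed)} bytes "
      f"(ratio ~{len(raw) / len(packed):.0f}:1)")

# Lossless: decompression restores every pixel value exactly.
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(256, 256)
assert np.array_equal(restored, image)
```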
FILTERING
- Purpose: to suppress unwanted (non-object) information and to enhance wanted (object) information.
  - Enhancive filtering: for enhancing edges and regions, for intensity-scale standardization, and for correcting background variation.
  - Suppressive filtering: mainly for suppressing random noise.
- Filtering is an operation that changes the observable quality of an image in terms of resolution, contrast, and noise.
- Typically, filtering involves applying the same or a similar mathematical operation at every pixel in an image; for example, spatial filtering modifies the intensity of each pixel using some function of the neighboring pixels (a minimal 3x3 smoothing sketch appears at the end of this section).
- Filtering is one of the most elementary image processing operations.

EDGES
- One of the main applications of image processing and image analysis is to detect structures of interest in images.
- In many situations, the structure of interest and the surrounding structures have different image intensities.
- By searching for discontinuities in the image intensity function, we can find the boundaries of structures of interest; these discontinuities are called edges.
- For example, in an x-ray image there is an edge at the boundary between bone and soft tissue.

EDGE DETECTION
- Edge detection algorithms search for edges in images automatically.
- Because medical images are complex, they contain very many discontinuities in image intensity; most of these are not related to the structure of interest and may be due to noise, imaging artifacts, or other structures.
- Good edge detection algorithms identify edges that are more likely to be of interest.
- However, no matter how good an edge detection algorithm is, it will frequently find irrelevant edges. Edge detection algorithms are not powerful enough to identify structures of interest completely automatically in most medical images; instead, they are a helpful tool for more complex segmentation algorithms, as well as a useful visualization tool (a simple gradient-difference sketch also appears at the end of this section).

INAPPROPRIATE USE OF ENHANCEMENT METHODS
- Enhancement methods themselves may increase noise while improving contrast.
- They may eliminate small details and edge sharpness while removing noise.
- They may produce artifacts in general.

IMAGE REGISTRATION
- Often in medical image analysis we have to process information from multiple images:
  - images with different modalities (CT, PET, MRI) from the same subject
  - images acquired at different time points from a single subject
  - images of the same anatomical regions from multiple subjects
- In all these, and many other situations, we need a way to find and align corresponding locations in multiple images.
- Image registration is the field that studies optimal ways to align and normalize images.

CHARACTERIZATION OF IMAGE REGISTRATION PROBLEMS
- There are many different types of image registration problems.
- They can be characterized by two main components: the transformation model and the similarity metric (a toy translation-and-SSD sketch follows below).
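To illustrate "a function of the neighboring pixels", here is a minimal 3x3 mean (smoothing) filter written directly with NumPy; library routines such as scipy.ndimage.uniform_filter do the same job more efficiently. The toy image and the edge-padding choice are assumptions.

```python
import numpy as np

def mean_filter_3x3(image):
    """Replace each pixel by the average of its 3x3 neighbourhood.

    Edges are handled by padding with the nearest pixel value; this is one
    common choice, not the only one.
    """
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + image.shape[0],
                          1 + dc:1 + dc + image.shape[1]]
    return out / 9.0

noisy = np.array([[10, 10, 10, 10],
                  [10, 90, 10, 10],     # a single noisy spike
                  [10, 10, 10, 10]], dtype=float)
print(mean_filter_3x3(noisy).round(1))  # the spike is spread out and suppressed
```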
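A deliberately simple illustration of searching for intensity discontinuities: compute horizontal and vertical intensity differences and flag pixels where the gradient magnitude exceeds a threshold. Practical edge detectors use operators such as Sobel or Canny; the threshold and toy image below are assumptions.

```python
import numpy as np

def simple_edges(image, threshold=50.0):
    """Mark pixels whose local intensity change exceeds `threshold`."""
    img = image.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]      # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]      # vertical differences
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold

# Toy "bone against soft tissue": a bright block inside a darker background.
img = np.full((6, 6), 40.0)
img[2:5, 2:5] = 200.0
print(simple_edges(img).astype(int))            # 1s trace the block's boundary
```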
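A toy illustration of the two components named above: the transformation model here is an integer translation and the similarity metric is the sum of squared differences (SSD), searched exhaustively. Real registration methods use richer transformation models and proper optimisers; everything below is an assumed, simplified setup.

```python
import numpy as np

def ssd(a, b):
    """Similarity metric: sum of squared intensity differences (lower is better)."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def best_translation(fixed, moving, max_shift=3):
    """Transformation model: integer (dy, dx) translations, found by brute force."""
    best, best_score = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ssd(fixed, shifted)
            if score < best_score:
                best_score, best = score, (dy, dx)
    return best

fixed = np.zeros((8, 8))
fixed[3:5, 3:5] = 1.0
moving = np.roll(fixed, (1, 2), axis=(0, 1))     # the same object, shifted by (1, 2)
print(best_translation(fixed, moving))            # -> (-1, -2), undoing the shift
```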
IMAGE SEGMENTATION

WHAT IS IMAGE SEGMENTATION?
- Image segmentation is the process of isolating objects of interest from the rest of the scene (Castleman).
- Image segmentation is the process of partitioning an image into non-intersecting regions such that each region is homogeneous and the union of no two adjacent regions is homogeneous (Pal).
- Image segmentation is dividing an image into parts that have a strong correlation with objects or areas of the real world contained in the image (Watt).
- The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application.
- The segmentation is based on measurements taken from the image, which might be grey level, colour, texture, depth, or motion.
- Usually image segmentation is an initial and vital step in a series of processes aimed at overall image understanding.
- Applications of image segmentation include:
  - identifying objects in a scene for object-based measurements such as size and shape
  - identifying objects in a moving scene for object-based video compression (MPEG-4)
  - identifying objects at different distances from a sensor, using depth measurements from a laser range finder, to enable path planning for mobile robots

SEGMENTATION BASED ON GREYSCALE

SEGMENTATION BASED ON TEXTURE

SEGMENTATION BASED ON MOTION
- The main difficulty of motion segmentation is that an intermediate step is required to estimate (either implicitly or explicitly) an optical flow field.
- The segmentation must be based on this estimate and not, in general, on the true flow.

SEGMENTATION BASED ON DEPTH
- This example shows a range image, obtained with a laser range finder.
- A segmentation based on the range (the object's distance from the sensor) is useful in guiding mobile robots.

SEGMENTATION TECHNIQUES
Two very simple image segmentation techniques are based on the grey-level histogram of an image:
- thresholding
- clustering

THRESHOLDING
- One of the most widely used methods for image segmentation. It is useful in discriminating the foreground from the background. By selecting an adequate threshold value T, a grey-level image can be converted to a binary image.

THRESHOLDING TECHNIQUES
- MEAN TECHNIQUE – uses the mean value of the pixels as the threshold value; works well in the strict case of images that have approximately half of the pixels belonging to the objects and the other half to the background (a minimal sketch appears after the watershed steps below).
- P-TILE TECHNIQUE – uses knowledge about the area size of the desired object to threshold an image.
- HISTOGRAM DEPENDENT TECHNIQUE (HDT) – separates the two homogeneous regions of the object and background of an image.
- EDGE MAXIMIZATION TECHNIQUE (EMT) – used when there is more than one homogeneous region in the image, or where there is a change of illumination between the object and its background.
- VISUAL TECHNIQUE – improves people's ability to accurately search for target items.

CLUSTERING
- Defined as the process of identifying groups of similar image primitives.
- It is a process of organizing objects into groups based on their attributes (a small grey-level clustering sketch also follows after the watershed steps below).
- An image can be grouped based on keywords (metadata) or its content (description).
  - KEYWORD – metadata describing the image; the keywords of an image refer to its different features.
  - CONTENT – refers to shapes, textures, or any other information that can be derived from the image itself.

CLUSTERING SEGMENTATION APPROACHES

WATERSHED-BASED SEGMENTATION
Steps:
1. Derive the surface image: a variance image is derived from each image layer. Centred at every pixel, a 3x3 moving window is used to derive the variance for that pixel. The surface image for watershed delineation is a weighted average of all variance images from all image layers; equal weight is assumed in this study.
2. Delineate watersheds: from the surface image, pixels within a homogeneous region form a watershed.
3. Merge segments: adjacent watersheds may be merged to form a new segment with a larger size according to their spectral similarity and a given generalization level.
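A minimal sketch of the mean technique from the thresholding list above: the threshold T is simply the average grey level, and the grey-level image is converted to a binary image. The toy image is an assumption.

```python
import numpy as np

def mean_threshold(image):
    """Mean technique: use the average grey level as the threshold T."""
    T = image.mean()
    return (image > T).astype(np.uint8)   # 1 = object/foreground, 0 = background

img = np.array([[20, 30, 200, 210],
                [25, 35, 190, 220],
                [30, 20, 205, 215]], dtype=float)
print(mean_threshold(img))                # the bright half becomes 1, the dark half 0
```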
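A small sketch of grey-level clustering with k = 2, in the spirit of the clustering description above; libraries such as scikit-learn provide full k-means implementations. The sample values and iteration count are assumptions.

```python
import numpy as np

def two_means(values, iterations=10):
    """Group grey levels into two clusters with a tiny k-means (k = 2)."""
    values = values.astype(float).ravel()
    c0, c1 = values.min(), values.max()          # initial cluster centres
    for _ in range(iterations):
        labels = np.abs(values - c1) < np.abs(values - c0)   # True -> cluster 1
        if labels.any():
            c1 = values[labels].mean()
        if (~labels).any():
            c0 = values[~labels].mean()
    return c0, c1, labels

pixels = np.array([12, 15, 18, 200, 210, 190, 14, 205])
c0, c1, labels = two_means(pixels)
print(round(c0, 1), round(c1, 1))    # the two grey-level group centres
print(labels.astype(int))            # cluster membership per pixel
```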
REGION-GROW APPROACH
- This approach relies on the homogeneity of spatially localized features.
- It is a well-developed technique for image segmentation. It postulates that neighboring pixels within the same region have similar intensity values.
- The general idea of this method is to group pixels with the same or similar intensities into one region according to a given homogeneity criterion (a minimal seed-based sketch follows below).

EDGE-BASED METHODS
- Edge-based methods center around contour detection; their weakness in connecting broken contour lines makes them, too, prone to failure in the presence of blurring.

CONNECTIVITY-PRESERVING RELAXATION-BASED METHOD
- Also referred to as the active contour model.
- The main idea is to start with some initial boundary shape represented in the form of spline curves, and iteratively modify it by applying various shrink/expansion operations according to some energy function.
- Partial differential equations (PDEs) have been used for segmenting medical images.
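A minimal sketch of the region-grow idea described above: starting from a seed pixel, 4-connected neighbours are added while their intensity stays within an assumed tolerance of the seed value (one possible homogeneity criterion, not the only one). The seed position, tolerance, and toy image are assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tolerance=20.0):
    """Grow a region from `seed`, adding 4-connected neighbours whose intensity
    is within `tolerance` of the seed value (a simple homogeneity criterion)."""
    h, w = image.shape
    seed_value = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_value) <= tolerance:
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

img = np.array([[10, 12, 100, 102],
                [11, 13, 101,  99],
                [12, 14,  98, 103]], dtype=float)
print(region_grow(img, seed=(0, 2)).astype(int))   # grows over the bright block only
```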
