Medical Imaging: 1st Test Summaries (PDF)

Summary

This document summarizes key concepts in medical imaging, including monochrome and RGB image representations, brightness profiles, and the major imaging modalities. It also details image processing techniques, image quality characteristics, and essential concepts in digital image analysis, with applications in medical contexts.

Full Transcript

1st Test Summaries: PPT ME_1

Medical imaging is a critical field in biomedical engineering, designed to visualize internal body structures and functions for diagnosis, monitoring, and treatment planning. It involves several techniques, each with its strengths, limitations, and applications.

Introduction to Medical Imaging
The evolution of medical imaging has shifted diagnostics from qualitative to quantitative, allowing evidence-based medical decisions. Modern imaging technologies combine microelectronics and computer advancements, making early detection and precise monitoring possible.
Revolution in medical diagnosis: advances in microelectronics and computer science; development of tissue imaging technology; qualitative diagnosis -> quantitative diagnosis ("evidence-based medicine").

Monochrome Image as a 2D Function
A monochrome (grayscale) image is represented as a 2D function f(x, y), where x and y denote spatial coordinates and f is the pixel intensity at a given location. Pixel values range between minimum and maximum intensity levels, typically 0 to 255 in an 8-bit grayscale image, where 0 represents black, 255 represents white, and intermediate values depict varying shades of grey. This 2D representation captures spatial relationships and variations in brightness, enabling the visualization of patterns or objects.

Image Brightness Profile
The brightness profile of an image represents the variation of pixel intensities along a specific line or region of the image. For example, if you select a horizontal or vertical line in the image, the brightness profile is a graph of intensity values along that line. This profile is essential for analyzing spatial intensity changes, which can highlight edges, patterns, or transitions in the image.
Applications: edge detection; analysis of texture or uniformity; assessment of object boundaries.

RGB Color Image
An RGB image comprises three components: Red (R), Green (G), and Blue (B).
Each pixel is defined by three intensity values corresponding to these components, allowing a vast array of colors through additive mixing: Red + Green = Yellow; Red + Blue = Magenta; Green + Blue = Cyan; Red + Green + Blue = White. RGB images are typically represented as a 3D array where each layer corresponds to one color channel. The intensity of each channel is usually stored in 8 bits, leading to 2^24 possible color combinations.

RGB Color Image - Color Component Profiles
The color component profiles refer to the individual distributions of red, green, and blue intensities across an image. These profiles provide insight into the contribution of each primary color at various spatial locations.
Separation of channels: by isolating the R, G, and B channels, the intensity variation across the image for each color can be analyzed.
Visualization: the channels can be viewed as separate grayscale images for R, G, and B.
Applications: detecting dominant colors in specific regions; color-based segmentation and analysis.
Key characteristics:
Bit depth: each color channel is often coded with 8 bits, giving 256 levels per channel.
Total colors: the combination allows 2^24 = 16,777,216 unique colors.
Advantages: high visual fidelity and versatility in representing real-world colors.
1. Representation and Storage: Monochrome images require less storage than RGB images due to their single intensity channel; RGB images require more computational and storage resources because of their multi-channel nature.
2. Conversion Between Monochrome and RGB: RGB images can be converted to grayscale by applying a weighted sum to the color components, typically: f_grey = 0.2989*R + 0.5870*G + 0.1140*B.
3. Applications in Medical Imaging: Monochrome images are commonly used in modalities like X-rays and CT scans. RGB and color profiles are essential in endoscopic imaging and pathology to highlight tissue characteristics.
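The weighted-sum grayscale conversion above can be sketched in a few lines of Python (a pure per-pixel version; the weights are the standard ITU-R BT.601 luminance coefficients quoted in the notes):

```python
# Weighted-sum grayscale conversion: f_grey = 0.2989*R + 0.5870*G + 0.1140*B

def rgb_to_grey(r, g, b):
    """Map one 8-bit RGB pixel to an 8-bit grey level."""
    return round(0.2989 * r + 0.5870 * g + 0.1140 * b)

print(rgb_to_grey(255, 255, 255))  # 255: white stays white (weights sum to ~1)
print(rgb_to_grey(0, 0, 0))        # 0: black stays black
print(rgb_to_grey(0, 255, 0))      # green dominates perceived brightness
```

Note that the green weight is the largest, reflecting the eye's greater sensitivity to green; a pure green pixel therefore maps to a brighter grey level than a pure red or pure blue one.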
Digital Image
A digital image is represented as a collection of discrete picture elements, or pixels, arranged in a 2D matrix. Each pixel carries numerical data representing the intensity (for grayscale images) or the color (for RGB images).
Key processes in digital imaging:
1. Discretization: divides the image into a grid of pixels.
2. Quantization: assigns specific intensity levels to pixels based on the signal detected.

Digital Image as a Pixel Array
A digital image is mathematically described as f(x, y), where x and y are spatial coordinates and f is the intensity value at that location. For grayscale images, this results in a 2D matrix of dimensions M × N, where M is the number of rows and N the number of columns. Each pixel assumes a nonnegative value, typically within a limited range, such as 0-255 for an 8-bit image. In color images, each pixel has multiple components (e.g., RGB values); the structure of a color image is a 3D array, i.e., three 2D arrays for the red, green, and blue components, each quantized separately.

3D Imaging
Three-dimensional imaging is often achieved by combining multiple two-dimensional images obtained via techniques like CT and MRI, which produce volume data useful for understanding anatomical structures more comprehensively.

Color-Indexed Image
A color-indexed image uses a palette (or lookup table) to map index values stored in the image to specific colors.
Structure: the image matrix contains indices (e.g., 0, 1, 2, ...); a color palette maps each index to an RGB triplet (R, G, B).
Advantages: efficient storage, especially for images with limited color variations.
Limitations: not as flexible as RGB for displaying high-detail or continuous-tone images.

Computer Vision Systems Overview
A computer vision system replicates human visual perception using computational techniques to analyze and interpret visual data.
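Returning to the color-indexed representation above, a minimal sketch (the three-color palette and the 2×3 image are hypothetical, illustrative values):

```python
# A color-indexed image stores small integers; a palette (lookup table)
# maps each index to an RGB triplet.
palette = [(0, 0, 0), (255, 0, 0), (255, 255, 255)]  # 0=black, 1=red, 2=white

indexed = [[0, 1, 2],
           [2, 1, 0]]  # 2x3 image, one small index per pixel

# Expanding to a full RGB image replaces each index by its palette entry:
rgb = [[palette[i] for i in row] for row in indexed]
print(rgb[0][1])  # (255, 0, 0): index 1 looks up red
```

The storage saving is visible even here: the indexed matrix holds one small integer per pixel, while the expanded RGB image holds three 8-bit values per pixel.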
Core Components of a Vision System
Image Acquisition: capturing images via sensors (e.g., cameras, scanners); resolution and dynamic range are important.
Preprocessing: enhancing image quality by removing noise or normalizing contrast.
Segmentation: partitioning an image into meaningful regions or objects. Techniques: thresholding, edge detection, and clustering.
Feature Extraction: identifying key characteristics of an image (e.g., shapes, textures).
Analysis and Interpretation: algorithms for pattern recognition, classification, and decision-making.

Linear Filters
Purpose: modify pixel values based on a weighted average of surrounding pixels.
Types: smoothing filters reduce noise but may blur edges; sharpening filters enhance edges and details; edge detectors identify boundaries between different regions.
Examples: high-pass filters for sharpening; low-pass filters for noise reduction.

Nonlinear Filters
Purpose: more robust against outliers; focus on preserving edges.
Types: median filters replace each pixel value with the median of its neighborhood, ideal for removing "salt-and-pepper" noise; rank filters select a pixel intensity based on rank-order statistics.

Applications in Medical Imaging:
Noise Reduction: essential in modalities like MRI and ultrasound, where noise is prominent.
Edge Preservation: maintains the integrity of anatomical structures during image enhancement.
Feature Detection: improves accuracy in recognizing critical patterns like lesions or organ boundaries.

Types of Filters (by frequency response):
Low-pass filters: suppress high-frequency content, reducing noise and giving the image a smoother appearance.
High-pass filters: suppress low-frequency content and let high frequencies pass, enhancing edges and making the image appear sharper.

Brodatz Textures: A standard dataset of grayscale texture images, widely used for research in image processing, segmentation, and classification tasks.
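The median filter described above can be illustrated with a 1-D sketch (pure Python; the signal values and 3-sample window are illustrative, real image filters use a 2-D neighborhood):

```python
# 1-D median filter: each sample is replaced by the median of its 3-sample
# neighborhood (edges left as-is). Isolated "salt-and-pepper" outliers are
# removed while the step edge between 10 and 90 is preserved.

def median3(signal):
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]  # median of 3 neighbours
    return out

noisy = [10, 10, 255, 10, 10, 90, 90, 0, 90]  # 255 and 0 are impulse noise
print(median3(noisy))  # -> [10, 10, 10, 10, 10, 90, 90, 90, 90]
```

A mean (smoothing) filter on the same signal would spread the 255 and 0 spikes into their neighbours and soften the 10-to-90 edge; the median removes the spikes and keeps the edge sharp, which is exactly the edge-preserving behaviour the notes attribute to nonlinear filters.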
Examples include patterns like wood grains, fabrics, and natural textures.
Segmentation: The process of dividing an image into regions with similar texture patterns. Used in applications such as object detection, medical imaging (e.g., distinguishing tissue types), and pattern recognition.
Segmentation Techniques:
Feature-Based Segmentation: uses statistical measures (e.g., entropy, contrast) to identify regions with homogeneous textures.
Edge-Based Segmentation: detects boundaries where texture changes abruptly.
Clustering: groups similar pixels into texture-specific clusters.
Applications in Medical Imaging: differentiating between types of tissues or abnormalities based on texture (e.g., tumours vs. normal tissue).

Image Processing and Analysis
Image processing in medical imaging is integral for enhancing image quality, extracting important features, and facilitating efficient interpretation. Techniques include:
Filtering: both linear and nonlinear filters are employed to smooth images, reduce noise, or enhance features such as edges.
Segmentation: divides an image into meaningful regions for further analysis, using methods like k-means clustering or multilayer perceptrons.

2. Key Imaging Modalities

2.1 Radiography (X-ray Imaging)
Discovery: Wilhelm Röntgen, 1895.
Principle: X-rays penetrate tissues to form images on film or digital receptors.
Applications: orthopedics, mammography, dentistry, and pulmonology.
Diagnosis: breast cancer (mammography), osteoporosis.
Advantages: low-cost, portable, and widely available equipment.
Limitations: uses ionizing radiation; limited in imaging soft tissues.

2.2 Computed Tomography (CT)
Principle: combines multiple X-ray images to create cross-sectional and 3D views.
Applications: neurology (brain tumor detection), pulmonology (lung diseases), oncology (tumor staging), cardiology, gastroenterology (kidney and liver diseases).
Advantages: high-resolution images.
Effective for detailed anatomical visualization.
Limitations: high equipment cost; significant radiation exposure.

2.3 Magnetic Resonance Imaging (MRI)
Principle: uses magnetic fields and radio waves to detect hydrogen nuclei in tissues.
Applications: neurology (brain imaging), cardiology (vessel analysis), oncology (soft-tissue tumors), angiography, gastroenterology (abdominal organs), orthopedics (osteoporosis).
Advantages: non-invasive and non-ionizing; exceptional soft-tissue contrast.
Special Techniques: functional MRI (fMRI) maps brain activity.
Limitations: high cost and stationary equipment.

Functional Magnetic Resonance Imaging (fMRI)
fMRI measures brain activity by detecting changes in blood oxygenation levels, leveraging the differences in magnetic properties between oxygenated and deoxygenated hemoglobin.
Key Concepts:
Oxygenated hemoglobin (oxyhemoglobin) is diamagnetic (weakly repelled by a magnetic field).
Deoxygenated hemoglobin (deoxyhemoglobin) is paramagnetic (creates small local distortions in the magnetic field).
Mechanism of Signal Change: during neuronal activity, localized blood flow increases, delivering more oxygenated blood to active brain regions. This results in a net decrease in deoxyhemoglobin, reducing magnetic field inhomogeneities and increasing the fMRI signal in T2- and T2*-weighted images.
Biological Basis of the Signal:
Neurovascular coupling: neuronal activity increases cerebral blood flow and volume, but oxygen extraction remains relatively constant or increases minimally.
Blood-Oxygen-Level-Dependent (BOLD) contrast: the BOLD signal arises from the reduced deoxyhemoglobin concentration, which minimizes signal loss from intravoxel dephasing and thereby enhances signal intensity in activated brain regions.
Temporal and Spatial Resolution:
Temporal resolution is limited by the delay between neuronal activation and changes in the BOLD signal (typically a few seconds).
It also relates to vascular transit times, as oxygenated blood flows into the capillary bed and venous circulation.
Spatial resolution is limited by the volume of tissue with reduced deoxyhemoglobin concentration, which is influenced by vascular anatomy.
Clinical and Research Applications:
Presurgical mapping: identifies functional areas of the brain (e.g., motor and language regions) to minimize damage during procedures like tumor resection. Example: mapping areas near a tumor to avoid impairing speech or motor functions.
Stroke and neurovascular event evaluation: used to assess functional recovery and reorganization of brain activity after stroke or traumatic brain injury.
Cognitive and behavioral studies: fMRI has revolutionized the understanding of brain function, allowing precise localization of neural activity during cognitive, motor, and sensory tasks. Applications include studying learning, development, memory, and decision-making.
Key Example: a functional MR study evaluates motor, memory, and visual processing during tasks such as moving a finger in response to a cursor or watching cursor movements; subtraction imaging isolates the brain areas involved specifically in visual processing.

2.4 Ultrasonography (USG)
Principle: high-frequency sound waves generate real-time images.
Applications: obstetrics (fetal monitoring), cardiology (blood flow analysis), urology (prostate examination), gastroenterology.
Diagnosis: prostate, urinary bladder, uterus.
Advantages: safe, portable, non-ionizing, and cost-effective.
Limitations: operator-dependent and difficult to interpret; limited resolution and image quality.

2.5 Nuclear Medicine (PET, SPECT)
Principle: radiotracers emit gamma rays to visualize metabolic processes and molecular changes, often combined with CT.
Applications: oncology (early tumor detection); neurology (Alzheimer's and Parkinson's disease,
Huntington's disease); almost all medical specialties.
Advantages: high sensitivity for functional imaging; short examination time (limited by the half-life of the radioisotope).
Limitations: involves radiation; expensive equipment and tracers.

2.6 Endoscopy
Principle: uses cameras and light sources for direct optical visualization of internal organs; image processing is necessary. Variants include additional surgical intervention (laparoscopy) and endoscopic capsules.
Applications: gastrointestinal tract (stomach, intestine, colon); respiratory tract; urinary tract; laparoscopy (e.g., removal of the gallbladder or polyps).
Advantages: direct imaging of structures.
Limitations: invasive; requires skilled operators; high equipment price.

2.7 Thermography
Principle: infrared radiation is used to detect temperature variations.
Applications: breast cancer detection; vascular disorders.
Advantages: non-invasive, non-ionizing, low cost, and mobile.
Limitations: low resolution; typically complementary to other methods.

PPT ME_2: Image Quality in Medical Imaging
The degree to which an image achieves its purpose, or demonstrates that no disease or injury is present, is described by the vague term "image quality". In part, image quality connotes how clearly the image displays information about the anatomy, physiology, and functional capacity of the patient, including alterations in these characteristics caused by disease or injury. This component of image quality is referred to as the clarity of information in the image. Image quality also depends on other features, including whether the proper areas of the patient are examined, whether the correct images are obtained, and even whether a disease or injury is detectable by imaging. Image clarity is a measure of how well information of interest is displayed in an image.
Clarity is influenced by four fundamental characteristics of the image: unsharpness, contrast, noise, and distortion/artifacts. In any image, the clarity of information is affected by these properties and by how they interact with each other.

Unsharpness, also known as blur, refers to the fuzziness or blurring of well-defined boundaries in an image. It occurs because all imaging processes introduce some loss of sharpness in the final image. Unsharpness is measurable and can be quantified in imaging systems.

Components of Unsharpness
Unsharpness results from four contributing factors, which collectively determine the overall blur.

Geometric Unsharpness (Ug): A consequence of the geometry of the image-formation process. It is influenced by the size of the radiation source (e.g., focal spot size) and by the distances between the radiation source, the object (patient), and the image receptor.
Factors influencing geometric unsharpness:
1. Focal spot size: a larger focal spot results in greater unsharpness.
2. Object-to-receptor distance: increasing this distance increases the blur.
3. Source-to-object distance: moving the radiation source farther from the object reduces geometric unsharpness.
Formula for geometric unsharpness:
Ug = (Focal Spot Size × Object-to-Receptor Distance) / (Source-to-Object Distance)
Example: for a radiographic procedure with a 2-mm focal spot, an object-to-receptor distance of 25 cm, and a source-to-receptor distance of 100 cm (hence a source-to-object distance of 75 cm):
Ug = (2 mm × 25 cm) / 75 cm ≈ 0.67 mm
Penumbra Effect: the area of unsharpness around the edges of the object caused by the finite focal spot size is referred to as the "penumbra" or "edge gradient".

Subject Unsharpness (Us): Due to the structure of the object being imaged.
Subject unsharpness occurs because not all structures within the object present sharp boundaries that can be imaged clearly. It arises from gradual variations in tissue characteristics, such as density or composition, or from the shape of the object itself.
Absorption Unsharpness: a type of subject unsharpness in which image quality is affected by how the x-rays are absorbed by different materials in the object.

Motion Unsharpness (Um): Caused by movement during image capture. Involuntary or voluntary motion during the imaging process results in blurring, causing the boundaries of structures to appear spread out over different regions of the image. Involuntary motion, such as heartbeats or peristalsis, often causes significant unsharpness, especially in fast-moving anatomical structures.
Reducing motion unsharpness: use shorter exposure times to minimize the time for motion to affect the image; ask the patient to remain still; use physical restraints or sedation in extreme cases.

Receptor Unsharpness (Ur): Introduced by the image receptor (e.g., film, detector). It arises from the characteristics of the receptor, including the thickness and composition of the screen or detector. A thicker receptor improves sensitivity but also increases unsharpness. Trade-off: fast intensifying screens reduce exposure time but can increase receptor unsharpness.

The total unsharpness Ut is the square root of the sum of squares of the components:
Ut = √(Ug² + Us² + Um² + Ur²)

Unsharpness in Various Imaging Modalities:
Nuclear Medicine: unsharpness occurs due to finite collimator hole sizes.
CT: geometric unsharpness depends primarily on the focal spot size and collimator characteristics.
MRI: unsharpness is influenced by signal localization, which depends on the spatial and temporal control of magnetic fields.
Ultrasound: affected by factors such as the beam width and the distance between the transducer and reflective surfaces.

Every radiographic procedure involves balancing geometric, motion, and receptor unsharpness. The goal is to minimize total unsharpness by optimizing exposure parameters, equipment settings, and imaging techniques.

Image Contrast
Image contrast refers to the ability of an imaging technique to distinguish subtle differences in features or structures within the object (e.g., a patient). It determines how well adjacent regions with varying characteristics are differentiated in the final image.

Types of Contrast
Intrinsic Contrast: intrinsic (or subject) contrast derives from the physical and physiological differences in the object being imaged.
Influencing factors: atomic number and density (e.g., differences in tissue composition in radiography); tissue properties (e.g., proton density and relaxation times in MRI); thickness of structures and their interactions with the imaging method.
Examples: chest radiographs have high intrinsic contrast due to significant density differences between the lungs and ribs; mammograms have low intrinsic contrast, where subtle differences in soft-tissue composition require careful imaging techniques.

Imaging Technique and Operator Choices: contrast is influenced by the parameters and techniques chosen during imaging.
Examples in radiography: low kVp (peak kilovoltage) produces a "soft" x-ray beam, enhancing subtle differences in tissues (e.g., mammography); high kVp suppresses differences in dense structures like bones to reveal underlying soft tissues (e.g., chest radiography).
MRI influence: pulse sequences (e.g., T1, T2, FLAIR) can selectively highlight different tissue properties to adjust contrast to the diagnostic need.

Contrast Agents: contrast agents are substances introduced into the body to improve visualization by altering signal intensity in specific regions.
Types of Contrast Agents:
Iodine-based agents: enhance x-ray and CT imaging by increasing attenuation in blood vessels (e.g., angiography).
Barium compounds: used in gastrointestinal imaging (e.g., barium swallow or enema).
Microbubbles: employed in ultrasound for enhanced imaging of blood flow.
Gadolinium: commonly used in MRI to affect tissue relaxation times and improve contrast.
Applications: myelography for the spinal cord; angiography for blood vessels; CT urography for urinary tract studies.

Receptor Contrast: depends on the image receptor (e.g., film, digital detector, or video monitor).
Film radiography: high-contrast films produce steep characteristic (H-D) curves but have narrow exposure latitude; low-contrast films have a less steep H-D curve and provide wider exposure latitude.
Digital imaging: greater flexibility in adjusting contrast post-acquisition; dynamic-range adjustments allow enhanced visualization of subtle features or broad exposure ranges.

Key Influences on Image Contrast
Imaging modality and physics: in CT and radiography, contrast depends on tissue density and the energy of the x-ray beam; in MRI, contrast depends on tissue properties like proton density and relaxation times.
Display adjustments: digital imaging systems allow contrast modification via windowing. Narrow windows enhance contrast by restricting the range of displayed intensity values; wide windows reduce contrast but reveal broader intensity ranges.

Practical Applications
Diagnostic accuracy: high contrast improves the detection of small lesions or subtle abnormalities.
Personalized imaging: tailoring contrast settings for specific clinical purposes enhances diagnostic precision.

Distortions & Artifacts in Imaging
Image Distortion: distortion occurs when the relative magnification of different parts of the imaged object is unequal, leading to changes in the shape or proportions of structures in the image.
Cause in radiography: objects closer to the source are magnified more than objects farther away; unequal distances between the source, object, and receptor create a nonuniform perspective in the image.
Calculation of Magnification (M): magnification is the ratio of the image size to the object size, which for a point source equals the source-to-receptor distance divided by the source-to-object distance.
Example: if the image size is 10 cm, the source-to-receptor distance is 100 cm, and the source-to-object distance is 80 cm, then M = 100 / 80 = 1.25, so the true object size is 10 cm / 1.25 = 8 cm.

Image Artifacts: artifacts are undesired distortions or features in an image that do not represent actual anatomical or physiological structures.
Causes:
Technical issues: equipment imperfections or errors during data acquisition.
Movement: patient motion or movement of anatomical structures.
External influences: metallic objects, nonuniform magnetic fields, or environmental interference.
Examples of Artifacts:
Radiographic: wrinkles or pressure marks on film.
Ultrasound: reverberation artifacts caused by multiple sound-wave reflections.
CT: ring artifacts due to unbalanced detectors, or streaks caused by dense structures like metal implants.
MRI: nonuniformities in the magnetic field due to metallic objects or dental bridgework.
Practical Considerations:
Managing distortions: adjust object placement relative to the source and receptor to reduce magnification discrepancies; use equipment that minimizes geometric distortion.
Recognizing and handling artifacts: artifacts are often easily recognizable, allowing compensation or correction during interpretation; advanced techniques, such as software corrections, can reduce the impact of artifacts in modalities like CT or MRI.

Image Noise
Image noise refers to irrelevant or extraneous information in an image that interferes with the visualization of critical features for diagnosis. It can obscure or distort details in medical images, reducing diagnostic accuracy.
Noise is commonly quantified using the signal-to-noise ratio (SNR), the ratio of signal power to noise power, often expressed in decibels: SNR = 10 log10(P_signal / P_noise).

Components of Image Noise

Structure Noise: caused by anatomical structures irrelevant to the purpose of the image. Example: rib shadows in a chest X-ray that obscure lung lesions. Solution: techniques like tomography or CT reduce structure noise by focusing on a single plane of interest.

Radiation Noise: results from non-uniformity of the radiation beam or from scattered radiation. Examples: the heel effect in X-rays, where beam intensity varies across the field; scatter from the patient's body contributing irrelevant signal. Solutions: grids in radiography to block scattered radiation; pulse-height analyzers in nuclear medicine to filter out scattered photons.

Receptor Noise: comes from imperfections or variations in the sensitivity of the image receptor. Examples: contamination on intensifying screens; non-uniform sensitivity of surface coils in MRI; unbalanced detectors in nuclear medicine. Solutions: regular calibration and quality control of detectors; use of clean and well-maintained receptors.

Quantum Mottle: caused by statistical variations in the number of photons or information carriers used to form the image. It is most visible in low-contrast images or when few photons are used (e.g., low radiation dose). Solution: increase the number of photons, at the cost of longer imaging times or higher radiation dose.

Examples of Noise in Modalities: radiography (rib shadows or scatter affecting lung imaging); nuclear medicine (scattered gamma rays and unbalanced detectors); ultrasound (speckle patterns caused by sound-wave scattering); MRI (noise introduced by surface coils or non-uniform fields).

Strategies to Reduce Noise:
Improved equipment: high-quality detectors with uniform sensitivity; optimized reconstruction algorithms in CT, MRI, or SPECT.
Optimized imaging parameters: use appropriate exposure settings to balance signal and noise; employ grids or collimators to minimize scatter.
Digital processing: post-processing techniques to filter and enhance image quality.

Gaussian Noise
Definition: random noise with a normal (Gaussian) distribution. Each pixel's intensity is altered by adding a value drawn from a Gaussian distribution with a specified mean (often zero) and variance.
Sources: often arises in imaging systems from thermal or electronic noise in the detectors.
Impact: reduces image clarity and introduces randomness into the signal, making fine details harder to detect.
Mitigation: can be reduced using low-pass filters or Gaussian smoothing.

Rician Noise
Definition: specific to magnitude MR images; Rician noise arises from the nonlinearity of the signal processing when complex MRI data are transformed into magnitude images.
Characteristics: dominates in low signal-to-noise ratio (SNR) areas, particularly in regions with weak MR signal. Unlike Gaussian noise, Rician noise is not symmetric: it shifts the mean intensity upwards in low-SNR regions, creating a bias.
Impact: affects quantitative measurements in MRI and complicates detection of low-contrast structures.
Mitigation: advanced denoising techniques such as non-local-means filtering, or methods that specifically account for the Rician distribution.

Key Points: Gaussian noise is common in radiography and other detector-based systems; Rician noise dominates in MRI, particularly in low-intensity regions. Both obscure details and reduce diagnostic clarity, and their management requires filters and post-processing tailored to the noise type.

Image Degradation Model
Definition: represents how an image is degraded during acquisition by factors such as noise, motion blur, or system imperfections.
Mathematical Model: the degraded image g(x, y) can be expressed as the true image f(x, y) convolved with a degradation function h(x, y), plus additive noise n(x, y):
g(x, y) = h(x, y) * f(x, y) + n(x, y)

Technical Parameters
The technical aspects of imaging play a vital role in determining image quality. For instance, the signal-to-noise ratio (SNR) serves as a quantitative measure of how much an image is compromised by noise.
Signal-to-Noise Ratio (SNR): measures the quality of an image by comparing the strength of the signal to the level of noise; the ratio of signal power to noise power (SNR = P_signal / P_noise) helps assess overall image clarity.
Importance: high SNR means a clear image with minimal noise interference; low SNR means a noisy image that is harder to interpret.

Image Degradation and Restoration Techniques
In practice, images are often degraded by noise and other factors. One method to mitigate this degradation is Wiener filtering, a technique used to restore the original image by statistically minimizing the noise.
Image Restoration: the process of recovering a degraded image toward its original state by reversing the effects of noise and blur.
Common Techniques:
Inverse Filtering: assumes knowledge of the degradation function h(x, y) and applies its inverse to recover the original image.
Wiener Filtering: balances noise reduction and deblurring using statistical models of the image and the noise.

Point Spread Function (PSF)
The Point Spread Function (PSF) describes how a point source of light is represented by an imaging system and is crucial in determining the minimum distance at which two objects can be distinctly recognized. It quantifies the system's response, showing how an ideal point source is blurred in the resulting image, and is central to understanding image sharpness and resolution. The PSF is a mathematical model of how an imaging system processes an infinitesimally small point object (idealized as a point source).
Narrow, sharply peaked PSF: indicates high spatial resolution, with minimal blurring.
Broad, flat PSF: indicates poor spatial resolution, with significant blurring.
The PSF helps in assessing an imaging system's ability to resolve small details.
Mathematical Representation: if the input image is f(x, y) and the imaging system's PSF is h(x, y), the observed image g(x, y) is given by:
g(x, y) = f(x, y) * h(x, y)
where * denotes convolution.
Relationship with Resolution: a narrower PSF indicates higher system resolution because less blurring occurs; a broader PSF means poorer resolution and greater blurring.
Physical Meaning: the PSF characterizes the optical or electronic imperfections of the imaging system; in medical imaging it determines the system's ability to resolve fine anatomical detail.
Applications: system performance evaluation (assessing and comparing imaging system quality) and image restoration (knowledge of the PSF enables deconvolution techniques to recover original image features).

Modulation Transfer Function (MTF)
The Modulation Transfer Function (MTF) is the normalized magnitude of the Fourier transform of the PSF, MTF(u, v) = |H(u, v)| / |H(0, 0)|, and is indicative of the frequency characteristics of the imaging system. High MTF values are desirable, as they imply better quality and resolution in the captured images. The MTF quantifies the ability of an imaging system to faithfully reproduce the modulation (contrast variations) of an object's features, particularly those represented by sine waves (gradual variations in intensity). It measures how well the system reproduces the spatial frequencies of the object, describing the system's spatial resolution.
Spatial Frequency: refers to the level of detail in an image, typically measured in cycles per millimeter (cycles/mm). High spatial frequencies correspond to fine details, while low frequencies relate to broader features.
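The convolution relation between an ideal point source and the PSF can be demonstrated with a 1-D sketch (pure Python; the two PSFs are illustrative, and real imaging systems use 2-D convolution):

```python
# PSF as convolution: the observed image g is the true image f convolved
# with the system PSF h. A point source shows how a broader PSF spreads
# (blurs) the point more.

def convolve(f, h):
    """Full discrete 1-D convolution of two lists."""
    n = len(f) + len(h) - 1
    g = [0.0] * n
    for i, fi in enumerate(f):
        for j, hj in enumerate(h):
            g[i + j] += fi * hj
    return g

point = [0, 0, 1, 0, 0]                 # idealized point source
narrow_psf = [0.1, 0.8, 0.1]            # sharply peaked -> little blur
broad_psf = [0.2, 0.2, 0.2, 0.2, 0.2]   # flat -> heavy blur

print(convolve(point, narrow_psf))  # peak stays at 0.8: high resolution
print(convolve(point, broad_psf))   # energy spread evenly: poor resolution
```

Both PSFs integrate to 1 (they redistribute intensity rather than create it), so the blurred point keeps the same total signal; only its concentration, and hence the resolvability of nearby points, changes.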
Contrast Response Curve Limitation: The contrast response function, which shows the ability to distinguish features at various spatial frequencies, has limitations when test objects require very high frequencies (10–15 cycles/mm), which are hard to manufacture.
The MTF is typically represented as a curve where:
Low spatial frequencies: MTF is high (close to 1, or 100%), meaning the system faithfully reproduces broad features.
High spatial frequencies: As frequency increases, the MTF decreases, signifying that the system cannot faithfully reproduce fine details and resolution degrades.
Example:
o The curve starts at 100% at low frequencies and gradually decreases, eventually reaching 0% at high frequencies.
o The point where the MTF drops to 0.1 is often considered the cutoff frequency, above which no features can be resolved.
MTF of the complete system: The overall MTF of an imaging system is the product of the MTFs of its individual components (e.g., focal spot, motion, intensifying screen).
Example: For an x-ray film–screen system at 5 cycles/mm:
Focal spot MTF: 0.9
Motion MTF: 0.8
Intensifying screen MTF: 0.7
Composite MTF: 0.9 × 0.8 × 0.7 ≈ 0.5
The MTF of a system cannot exceed the weakest component's resolution at any given frequency.
Applications of MTF:
System Evaluation: MTF is crucial for assessing spatial resolution in diagnostic imaging systems.
Component Optimization: Understanding the MTF helps identify system limitations and allows for improvement, such as reducing the focal spot size or optimizing motion correction.
Practical Considerations: The cutoff frequency marks the point beyond which high-frequency details cannot be represented, and the resolving power of the system is defined at this frequency. Higher MTF values at each frequency correspond to better spatial resolution, meaning the system can resolve finer details in the image.
Image Quality Factors:
o Clarity: Sharpness and detail visibility.
o Contrast: Ability to differentiate structures.
o Noise: Random variations that obscure details.
o Artifacts: Distortions caused by equipment or movement.
Improving Quality:
o Filters: Noise reduction (e.g., smoothing, median filters).
o Contrast agents: Enhance visibility in modalities like CT and MRI.
o Signal-to-noise ratio (SNR): Measures image clarity relative to noise.

Structure of the Atom
Bohr model:
Atomic number Z: # protons = # electrons
Mass number A: # protons + # neutrons
Electrons occupy orbits (at most 2n² electrons in orbit n); their locations are described probabilistically.
Nucleus: protons and neutrons.
(Project co-financed by the European Union under the European Social Fund.)

Properties of the Nuclei
Hydrogen nuclei contain a single proton. In classical-mechanics terms, the proton is a positively charged particle that circulates (spins), producing a magnetic field. This "small magnet" has its own magnetic moment: rotating proton ↔ spin.
Other nuclei having a magnetic moment (# protons ≠ # neutrons): 15N, 31P, 23Na. 12C and 16O do not have a magnetic moment.

Dipolar Fields
The Earth's field, a bar magnet, and a current loop all produce dipolar fields; μ denotes the magnetic moment.

Protons in a Magnetic Field
In a typical material, magnetic moments are oriented randomly. In the presence of a magnetic field B, the magnetic moments align themselves along the direction of the field.

Spins in a Magnetic Field
Magnetic moments also precess about the field, showing spinning-top-like behavior: rotation of the top about its own axis is first-order motion; precession of the top about the vertical axis (the gravity axis) is second-order motion.
Precession results from the interaction of forces with a rotating object. Angular momentum and gravity interact to cause the precession of a gyroscope; a magnetic moment and a magnetic field result in the precession of a proton.

The Larmor Frequency
The relationship between the field induction B [T] and the precession frequency f [Hz] is:
f = γB
where γ is the gyromagnetic ratio; γ(1H) = 42.58 MHz/T, so for a typical B = 1.5 T in MR scanners, f ≈ 64 MHz. This precession frequency is called the Larmor frequency.

External Force and Spins
When a force is applied to an object having angular momentum, the resulting motion is at right angles to the force. Attempting to push a gyroscope in the direction of precession causes the gyroscope to change its precession angle. The change of this angle is referred to as nutation (third-order motion).
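The Larmor relation f = γB can be checked with a one-liner, using the hydrogen gyromagnetic ratio of 42.58 MHz/T given in the notes:

```python
GAMMA_H = 42.58  # gyromagnetic ratio of 1H [MHz/T], value from the notes

def larmor_frequency_mhz(b_tesla: float) -> float:
    """Precession (Larmor) frequency in MHz for a given field strength in tesla."""
    return GAMMA_H * b_tesla

# Typical clinical field strengths
print(larmor_frequency_mhz(1.5))  # ~63.9 MHz (the "~64 MHz" quoted for 1.5 T scanners)
print(larmor_frequency_mhz(3.0))  # ~127.7 MHz
```

Because the resonance frequency scales linearly with B, a 3 T scanner must transmit and receive RF at roughly twice the frequency of a 1.5 T scanner.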
Magnetic Field Distribution for All Protons
With no external excitation, the transverse magnetization is 0 and only a resultant longitudinal magnetization remains.

Magnetization in a Static Field
In a static B0 field at thermal equilibrium there are more spins up than spins down, with random phases:
Mz ≠ 0, Mxy = 0
The resulting magnetization M ≠ 0 and is parallel to B0.

The Effect of an Additional Electromagnetic Pulse
The application of an RF pulse (an alternating magnetic field) at the Larmor frequency causes the phenomenon of resonance: the pulse energy is absorbed by the atomic nuclei, causing some of them to change the orientation of their magnetic moments to anti-parallel.

Effect of an Additional Field B1 Perpendicular to B0
To introduce a varying magnetic field B1 perpendicular to the static B0, an electromagnetic wave is applied. If the frequency of this RF wave is equal to the Larmor frequency, the spin is in magnetic resonance.
The B1 field, perpendicular to the static B0, causes spin nutation (increasing the angle between the magnetization vector and the z axis). The required wave frequencies fall in the RF (radio frequency) range.

Effects of the B1 Field
A varying B1 causes the following effects:
- loss of the parallel orientation of the spins -> Mz decays
- phasing of the spins -> Mxy appears and increases
While B1 is on, Mz shrinks from M toward 0 while Mxy grows from 0 toward M.

Relaxation
When B1 disappears, M returns to the equilibrium state:
- the longitudinal component Mz recovers
- the transverse component Mxy decays
- these two relaxation effects are governed by different time constants, T1 and T2
Energy is given off by the nuclei, generating an electromagnetic pulse.

Spin–Spin (Transverse) Relaxation – T2
Loss of coherence: reorientation and dephasing (spin–spin interactions such as collisions, and local B0 inhomogeneities). After a 90° pulse, Mxy decays to 37% of M0 at t = T2 and is essentially gone by 5T2.
Quantitatively, an exponential decay with time constant T2:
Mxy(t) = M0 e^(−t/T2)
T2 constants [ms] at B0 = 1 T:
Fat: 84
Muscle: 45
White matter: 92
Grey matter: 101
CSF: 1400

Spin–Lattice (Longitudinal) Relaxation – T1
Interaction with the spin surroundings -> net release of energy -> protons return to the lower-energy state of alignment. After a 90° pulse, Mz recovers to 63% of M0 at t = T1 and Mz ≈ M0 by 5T1.
Quantitatively, an exponential increase with time constant T1:
Mz(t) = M0 (1 − e^(−t/T1))
T1 constants [ms] at 1 T:
Fat: 240
Muscle: 730
White matter: 680
Grey matter: 809
CSF: 2500
Note that T1 > T2.

Source of the MR Signal (free induction decay signal)
The changing magnetic field induces a current in a loop of conductive wire — the receive coil (Faraday's law, the electromagnetic-induction principle). A proton has a magnetic moment and therefore acts like a small magnet. Precessing protons whose magnetic fields intersect the plane of the coil induce an electric current. This current is the FID "signal" of the magnetic resonance induced in the receiver coil; it comes only from the transverse magnetization vector.

INFRARED IMAGING TECHNOLOGY

HISTORICAL OVERVIEW
1737 - Émilie du Châtelet theoretically predicted the existence of infrared radiation
1800 - Sir William Herschel demonstrated the existence of infrared radiation for the first time
1860 - Gustav Kirchhoff formulated the blackbody theorem (E = J(T, λ))
1873 - Willoughby Smith discovered the photoconductivity of selenium
1879 - The Stefan–Boltzmann law was formulated empirically: the power radiated by a blackbody is proportional to T⁴
1880s - Lord Rayleigh and Wilhelm Wien each solved part of the blackbody equation, but these solutions are approximations that "blow up" outside their useful ranges (the so-called ultraviolet and infrared catastrophes)
1901 - Max Planck published the famous blackbody equation and theorem, solving the problem by quantizing the allowable energy transitions
1905 - Albert Einstein developed the theory of the photoelectric effect, defining the photon
1917 -
Theodor Case developed the thallium sulfide detector, and in World War I the British used the first infrared search-and-track system capable of detecting aircraft at a range of one mile
1935 - Lead salts used for missile-guidance purposes
1945 - The Zielgerät 1229 "Vampir" infrared weapon system introduced as the first portable infrared device used in a military application
1952 - Welker discovered InSb
1950s - Paul Kruse (Honeywell) and Texas Instruments formed the first infrared images before 1955
1958 - Lawson discovered the IR-detection properties of HgCdTe
1958 - Falcon and Sidewinder missiles developed using infrared technology
1961 - Cooper demonstrated pyroelectric detection
1965 - First commercial imagers (Agema, Hughes)
1965 - U.S. Army's night vision lab formed (now the Night Vision and Electronic Sensors Directorate)
1978 - Infrared imaging system at the Mauna Kea observatory
1978 - 64×64 arrays produced in InSb and HgCdTe

THERMAL INFRARED RADIATION
All objects in the universe emit radiation in the IR region. When an object gets hotter, it gives off far more intense infrared radiation and radiates at a shorter wavelength.
Planck's law describes the spectral radiance of electromagnetic radiation at all wavelengths emitted from a black body in a cavity in thermodynamic equilibrium. As a function of frequency ν and absolute temperature T, Planck's law is written as:
B(ν, T) = (2hν³/c²) · 1/(e^(hν/kT) − 1)
This function represents the emitted power per unit area of emitting surface, per unit solid angle, per unit frequency. The function peaks for hν = 2.82 kT.
Wien's displacement law states that the wavelength distribution of radiated heat energy from a black body at any temperature has the same shape as the distribution at any other temperature, except that each wavelength is displaced on the graph.
From this general law, it follows that there is an inverse relationship between the wavelength of the peak of the emission of a black body and its temperature:
λ_max = b / T
where b is Wien's displacement constant.
The Stefan–Boltzmann law states that the total energy radiated per unit surface area of a black body per unit time, j*, is directly proportional to the fourth power of the black body's temperature T:
j* = σT⁴
A more general case is that of a grey body, which does not absorb or emit the full amount of radiative flux; instead it radiates a portion of it, characterized by its emissivity ε. To find the total absolute power radiated by an object, we also have to take the surface area A into account. Then:
P = εσAT⁴

Physical constants:
Planck's constant: 6.6260693(11)×10⁻³⁴ J·s = 4.13566743(35)×10⁻¹⁵ eV·s
Wien's displacement constant: 2.8977685(51)×10⁻³ m·K
Boltzmann constant: 1.3806505(24)×10⁻²³ J·K⁻¹ = 8.617343(15)×10⁻⁵ eV·K⁻¹
Stefan–Boltzmann constant: 5.670400(40)×10⁻⁸ W·m⁻²·K⁻⁴
Speed of light: 299,792,458 m·s⁻¹

IR radiation occupies the region between visible light and microwaves in the electromagnetic spectrum. It covers wavelengths that range from 750 nm to 1 mm. The human-body emissions traditionally measured for diagnostic purposes occupy only a narrow band of 8 μm to 12 μm.
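These laws explain why that band matters. A minimal sketch using Wien's law and the grey-body Stefan–Boltzmann law, with the constants listed above; the skin temperature (310 K), body surface area (1.7 m²), and skin emissivity (0.98) are assumed illustrative values, not figures from the notes:

```python
B_WIEN = 2.8977685e-3   # Wien's displacement constant [m*K]
SIGMA = 5.670400e-8     # Stefan-Boltzmann constant [W*m^-2*K^-4]

def peak_wavelength_m(temp_k: float) -> float:
    """Wien's displacement law: lambda_max = b / T."""
    return B_WIEN / temp_k

def radiated_power_w(temp_k: float, area_m2: float, emissivity: float = 1.0) -> float:
    """Grey-body Stefan-Boltzmann law: P = eps * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# Skin at ~310 K peaks near 9.3 um, inside the 8-12 um band used diagnostically
print(peak_wavelength_m(310.0) * 1e6)        # ~9.35 (micrometres)
# Rough whole-body emission estimate (assumed A = 1.7 m^2, eps = 0.98)
print(radiated_power_w(310.0, 1.7, 0.98))    # ~870 W emitted (gross, before absorption from surroundings)
```

The emission peak landing near 9.3 μm is why diagnostic thermography is performed in this part of the spectrum.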
This region is also referred to as the long-wave IR (LWIR).

INFRARED DETECTORS
Material | Type | Spectral range (μm)
Indium gallium arsenide (InGaAs) | photodiode | 0.7–2.6
Germanium | photodiode | 0.8–1.7
Lead sulfide (PbS) | photoconductive | 1–3.2
Lead selenide (PbSe) | photoconductive | 1.5–5.2
Indium antimonide (InSb) | photoconductive | 1–6.7
Indium arsenide (InAs) | photovoltaic | 1–3.8
Platinum silicide (PtSi) | photovoltaic | 1–5
Indium antimonide (InSb) | photodiode | 1–5.5
Mercury cadmium telluride (MCT, HgCdTe) | photoconductive | 0.8–25
Mercury zinc telluride (MZT, HgZnTe) | photoconductive |
Lithium tantalate (LiTaO3) | pyroelectric |
Triglycine sulfate (TGS and DTGS) | pyroelectric |

Cooled infrared detectors
Cooled detectors are typically contained in a vacuum-sealed case and cryogenically cooled. Typical operating temperatures range from 4 K to just below room temperature; most modern cooled detectors operate in the 60 K to 100 K range. Without cooling, these sensors would be "blinded", i.e. flooded by their own radiation. The drawbacks of cooled infrared cameras are that they are expensive to produce and to run. Cooling is both power- and time-consuming: the camera may need several minutes to cool down before it can begin working. Although the cooling apparatus is bulky and expensive, cooled infrared cameras provide superior image quality compared to uncooled ones.
The most commonly used cooling systems are rotary Stirling-engine cryocoolers. An alternative to Stirling-engine coolers is to use gases bottled at high pressure, nitrogen being a common choice. The pressurized gas is expanded through a micro-sized orifice and passed over a miniature heat exchanger, which results in regenerative cooling via the Joule–Thomson effect. For such systems the supply of pressurized gas is a concern for field use.
Additionally, the greater sensitivity of cooled cameras allows the use of higher F-number lenses, making high-performance long-focal-length lenses smaller and cheaper for cooled detectors.
Materials used for cooled infrared detection include photodetectors based on a wide range of narrow-gap semiconductors such as InSb, InAs, HgCdTe, PbS, and PbSe. Infrared photodetectors can also be created with structures of wide-band-gap semiconductors manufactured as Quantum Well Infrared Photodetectors (QWIPs). Quantum wells for QWIPs are formed by "sandwiching" a material, like GaAs, between two layers of a material with a wider bandgap, e.g. AlAs. In quantum wells, electrons have a density of states with distinct steps, and the effective mass of holes in the valence band more closely matches that of electrons in the conduction band. These two factors are decisive for the high efficiency of quantum photodetectors.
QWIP structures are grown mainly by the extremely expensive technique of molecular beam epitaxy, which allows control of the layer thickness at the level of individual atomic monolayers. In principle, superconducting tunneling-junction devices could also be used as infrared sensors because of their narrow gap. Superconducting detectors offer extreme sensitivity, with some able to register individual photons. Although small detector arrays have been demonstrated, their use is difficult because they require careful shielding from the background radiation. Additionally, a number of cooled bolometer technologies exist.
(Figure: Stirling engine.)

Thermoelectric Coolers
Peltier element schematic: the thermoelectric legs are thermally in parallel and electrically in series.
(Figure: typical image from a QWIP detector.)

Uncooled infrared detectors
Uncooled infrared cameras do not require a cooling system, so they are much lighter, smaller, more reliable, and less expensive than cooled cameras. Two uncooled technologies were developed at about the same time: the Barium Strontium Titanate (BST) technology by Raytheon and the microbolometer technology by Honeywell.
BST cameras use a ferroelectric detector that converts infrared energy into a change in capacitance. BST detectors incorporate a mechanical chopper which rotates 30 times per second to enable the sensor to refresh itself. As a result, the images produced are rather choppy, with dark ghosts around hot objects and multiple images of the same object smeared across the screen while the camera moves.
The microbolometer technology is thermo-electric in nature and converts infrared energy into a change in resistance, instead of capacitance as in the BST technology. Microbolometer cameras do not require the chopper and thus can provide high-quality images. The pictures are also smoother and clearer, since automatic brightness control is achieved using advanced digital signal processing techniques instead of mechanical controls. Microbolometers collect light in the 7.5 μm to 14 μm spectral band, which provides better penetration through smoke, smog, dust, water, and vapor. On the other hand, they are less sensitive than cooled-detector thermal imagers and cannot be used for high-speed infrared applications.
The microbolometer sensitivity is partly limited by the thermal conductivity of the detector material. The speed of response depends on the thermal diffusivity, which is the ratio of the thermal conductivity to the thermal heat capacity.
Reducing the heat capacity increases the speed but also increases the thermal temperature fluctuations and, consequently, the noise. Increasing the thermal conductivity raises the speed but decreases the sensitivity.

Typical specifications of a state-of-the-art microbolometer:
Pixel pitch: 17 μm
Dimensions (L×W×H): 37.5 × 37.5 × 12.54 mm³
Power consumption:
