HMI2102_202320_TM_Common_WK1-4_CLO1-2 Medical Imaging Science PDF
Higher Colleges of Technology
Summary
Medical Imaging Science lecture notes for Medical Imaging Technology II (HMI 2102), covering medical imaging computer science (the power-of-2 notation, bits, bytes, and storage units), computed radiography, digital radiography, and digital radiographic technique.
Full Transcript
The Campus of Tomorrow Medical Imaging Science Medical Imaging Technology II HMI 2102 CLO1 (Part I): Medical Imaging Computer Science Monday, January 8, 2024 CLO 1: Examine and discuss the science and technology of modern digital image recording and post processing – Weeks 1-4 (30%) CLO 1.1: Advantages and disadvantages of digital images over analog film images, image storage, cost, image quality and manipulation. Matrix size, pixels, voxels, bit depth. Look-up tables, image contrast and spatial resolution, regions of interest (ROI) and field of view (FOV) CLO 1.2: Digital imaging quality to include: Image quality, Line Spread Function (LSF), Detective quantum efficiency (DQE), Modulation Transfer Function (MTF), image histograms and exposure indices CLO 1.3: CR cassettes and readers function, image processing including viewing parameters, image manipulation and dose considerations. Direct digital radiography (DR) detectors image processing including viewing parameters, image manipulation and dose considerations. 2 Outline Introduction The Power of 2 Notation Bit, Byte, kB, MB, GB, and TB Computer Architecture Computer Principal Parts Computer Program Computer Components Computer Applications to Medical Imaging 3 Introduction The computer has become evident in everyday life. Example: video games, automatic teller machines, highway toll systems, supermarket checkouts, ticket reservation centers, industrial processes, smart phones, traffic lights, and automobile ignition and guidance systems. 4 Cont’d Computer applications in radiology also continue to grow. Examples: Computed tomography, magnetic resonance imaging, diagnostic ultrasonography, and nuclear medicine imaging. Computers control high-voltage x-ray generators and radiographic control panels, making digital fluoroscopy and digital radiography routine. 5 But what are computers? The word computer is used as an abbreviation for any general- purpose, stored-program electronic digital device. - General purpose means the computer can solve problems. - Stored program means the computer has instructions and data stored in its memory. - Electronic means the computer is powered by electrical and electronic devices. - Digital means that the data are in discrete values. 6 The Power of 2 Notation Used in radiologic imaging to describe image size, image dynamic range (shades of gray), and image storage capacity. Examples on image size: - Digital images are made of discrete picture elements, pixels, arranged in a matrix. MRI and CT: 256 × 256 (28) to 1024 × 1024 (210). Digital fluoroscopy: 1024 × 1024 (210). Digital radiography: 2048 × 2048 (211). Digital mammography: 4096 × 4096 (212). 7 Bit, Byte, kB, MB, GB, and TB A bit describes a binary digit (0 or 1). The 26 characters of the alphabet, numerals and other special characters are usually encoded by 8 bits. 8 Cont’d 1 byte (B) = 8 bits 1 kilobyte (kB) = 210 = 1,024 bytes 1 megabyte (MB) = 1 kB × 1 kB = 210 × 210 = 220 = 1,048,576 bytes 1 gigabytes (GB) = 1 kB × 1 kB × 1 kB = 210 × 210 × 210 = 230 = 1,073,741,824 bytes = 1,024 MB 1 terabytes (TB) = 1 kB × 1 kB × 1 kB × 1kB = 210 × 210 × 210 × 210 = 240 = 1,099,511,627,776 bytes = 1,024 GB 9 Cont’d Each pixel contains a series of 1s and 0s (bit) defining the grayscale or shade of that particular point on a digital x-ray image. - Example: Digital mammogram has 16 bit dynamic range. Computer storage capacity is also expressed by the number of bytes that can be accommodated. 
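The bit, byte, and power-of-2 arithmetic above lends itself to a short worked example; the sketch below reproduces the calculations used in the quizzes that follow. A minimal Python sketch (the function names and the 16-bit, 2000 × 2500 pixel image are illustrative):

```python
def gray_shades(bit_depth):
    """Number of gray shades a pixel of the given bit depth can encode: 2^n."""
    return 2 ** bit_depth

def image_size_mb(rows, cols, bit_depth):
    """Uncompressed image size in MB: pixels x bits, / 8 for bytes, / 2^20 for MB."""
    total_bits = rows * cols * bit_depth
    return total_bits / 8 / 1024 / 1024

print(gray_shades(16))                           # 65536 shades of gray (16-bit dynamic range)
print(round(image_size_mb(2000, 2500, 16), 1))   # ~9.5 MB for a 16-bit 2000 x 2500 image
```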
- The computers typically used in radiology departments have capacities measured in GB or TB. 10 Quiz Q: How many shades of gray can digital mammography display? A: 216 = 65,536 shades of gray. 11 Quiz Q: How much storage space (MB) do you think a 16 bit 2000 × 2500 pixel x-ray image would take? A: 16 bits × 2000 × 2500 pixels = 80,000,000 bits = 10,000,000 bytes 10,000,000 bytes / 1024 = 9,765.625 kB 9,765.625 kB / 1024 ≈ 9.5 MB 12 Computer Principal Parts A computer has two principal parts: A. The hardware: Everything about the computer that is visible. Input, processing, memory, storage, output, and communications. B. The software: The computer programs that tell the hardware what to do and how to store and manipulate data. 13 Computer Program A sequence of instructions developed by a software programmer. Two classifications: A. Systems software Consists of programs that make it easy for the user to operate a computer to its best advantage. Operating systems (e.g., MAC-OS, Windows, and UNIX) B. Application programs Computer programs that are written by a computer manufacturer, by a software manufacturer, or by the users themselves to guide the computer to perform a specific task. Constitute most computer programs as we know them (e.g., Microsoft Office, Zoom, and Chrome). They are written in one of many high-level computer languages (e.g., Java, C++, and Python) 14 Cont’d 15 Computer Components 1. Central processing unit (CPU) 2. Random access memory (RAM) 3. Read-only memory (ROM) 4. Motherboard 5. Secondary memory (Storage) 6. Output hardware 7. Input hardware 16 Central Processing Unit (CPU) The primary element that allows the computer to manipulate data and carry out software instructions. Often referred to as the microprocessor. Examples: Intel Core i9 and AMD Vermeer Ryzen 5000. Microprocessor speeds usually are defined in megahertz (MHz) (1 million cycles per sec). - Today’s computers commonly run at up to several gigahertz (GHz; 1 GHz = 1000 MHz). 17 Random Access Memory (RAM) Aka. Main memory, primary storage, or internal memory. Its contents are temporary. All data processed by a computer pass through the RAM. The most efficient computers, therefore, have enough main memory to store all data and programs needed for processing. RAM capacity usually is expressed as megabytes (MB), gigabytes (GB), or terabytes (TB). 18 Read-only Memory (ROM) Contains information supplied by the manufacturer, called firmware that cannot be written on or erased. One ROM chip contains instructions that tell the CPU what to do when the system is first turned on. Another ROM chip helps the CPU transfer information among the screen, the printer, and other peripheral devices to ensure that all units are working correctly. 19 Motherboard The main circuit board in a system unit. Contains the microprocessor, any coprocessor chips, RAM chips, ROM chips, other types of memory, and expansion slots. 20 Secondary Memory (Storage) An archival form of memory. Examples: - Optical storage devices: Compact Discs (CDs) ~800 MB Digital Video Discs (DVDs) ~4.7 GB Blu-ray ~25 GB - Hard Disc Drives (HDD) ~500 GB to several TBs - Solid-state storage Devices (SSD) Flash drives ~8 GB to several TBs Internal SSD ~500 GB to several TBs - Cloud-based storage services (Dropbox and Google Drive): As much as you pay! CDs, DVDs, and flash drives are today’s common transferable storage devices. 21 Output and Input Hardware Output hardware - Examples: Printers, audio output devices, display screen or monitor. 
- Soft copy is the term that refers to the output seen on a display screen. Input hardware - Examples: keyboards, mice, trackballs, touchpads, and source data entry devices (include scanners, fax machines, imaging systems, audio and video devices, electronic cameras, voice-recognition systems, sensors, and biologic input devices). - A keyboard includes: Standard typewriter keys that are used to enter words and numbers Function keys that enter specific commands. Digital fluoroscopy uses function keys for masking, reregistration, and time-interval difference imaging. 22 Cont’d 23 Computer Applications to Medical Imaging It would be difficult to find a radiology department in the UAE that does not contain at least one computer. Computers in radiology departments are typically used to store, transmit, and read imaging examinations. In addition to the pixel information contained in the image, a typical x-ray image contains information about the patient, type of examination, and place of examination. - This information is stored in the image in what is called the header. 24 Cont’d Computers are becoming so advanced that now many mobile smart phones and tablets available today are more powerful than large computers available less than a decade ago. - This may further change the practice of medical imaging and medicine as well. “The FDA approved the first application that allows for the viewing of medical images on a mobile phone in 2011” “The Ambra Health mobile app provides medical image access across Connecticut Orthopaedic Specialists’ 21 locations.” https://www.itnonline.com/article/mobile-device-app-viewing-radiology 25 Summary A computer has two principal parts: the hardware and the software. Hardware consists of several types of components, including a CPU, memory units, input and output devices, and secondary memory devices. The basic parts of the software are the bits and bytes. Computer capacity is typically expressed in gigabytes or terabytes. Computers use a specific language to communicate commands in software systems and programs. Computers have greatly enhanced the practice of medical imaging. Computers have advanced to virtually eliminate the need for hard copy medical images. 26 Thank You 800 MyHCT (800 69428) www.hct.ac.ae The Campus of Tomorrow Medical Imaging Science Medical Imaging Technology II HMI 2102 2. CLO1 (Part II): Computed Radiography Wednesday, January 17, 2024 Objectives At the completion of this lecture, you should be able to do the following: 1. Describe several advantages of computed radiography over screen/film radiography. 2. Identify workflow changes when computed radiography replaces screen-film radiography. 3. Discuss the relevant features of a storage phosphor imaging plate. 4. Explain the operating characteristics of a computed radiography reader. 5. Discuss spatial resolution, contrast resolution, and noise related to computed radiography. 6. Identify opportunities for patient radiation dose reduction with computed radiography. 2 Outline The Computed Radiography Imaging Characteristics Image Receptor - Image Receptor Response - Photostimulable Luminescence Function - Imaging Plate - Image Noise - Light Stimulation–Emission Patient Characteristics The Computed Radiography - Patient Radiation Dose Reader - Workload - Mechanical Features - Optical Features - Computer Control 3 Introduction Digital imaging began with computed tomography (CT) and magnetic resonance imaging (MRI). 
Digital radiography was introduced in 1981 by Fuji with the first commercial computed radiography (CR) imaging system. Today medical imaging is complemented by multiple forms of digital radiography (DR) in addition to CR. At this time, CR is the most widely used DR modality. Although other DR systems are increasingly in use, it seems there will always be a need for CR because of its unique properties. Much of the information relevant to CR applies also to DR because CR is a form of DR. 4 Sequence of activity for screen-film radiography CR imaging eliminates some of these steps and can produce better medical images at lower patient dose. 5 The Computed Radiography Image Receptor Similarities between screen-film imaging and CR imaging. 1. Both use as the image receptor an x-ray–sensitive plate that is encased in a protective cassette. 2. The two techniques can be used interchangeably with any x-ray imaging system. 3. Both produce a latent image that must be made visible via processing. 6 Photostimulable Luminescence and Photostimulable Phosphors Some materials emit light promptly following x-ray exposure. - These are called photostimulable phosphors (PSP). - E.g., barium fluorohalide with europium (BaFBr:Eu or BaFI:Eu) PSP also emit light some time later when exposed to a different light source. - This process is called photostimulable luminescence (PSL). A note on Europium (Eu) - Present in very small amounts. - It is an activator and is responsible for the storage property of the PSL. - Without it there would be no latent image. 7 Cont’d Over time, the metastable electrons return to the ground state on their own. This can be accelerated or stimulated by exposing the phosphor to intense infrared light from a laser. 8 Storage Phosphor Screens The PSP, barium fluorohalide, is fashioned similarly to a radiographic intensifying screen. Because the latent image occurs in the form of metastable electrons, such screens are called storage phosphor screens (SPSs). SPSs are mechanically stable, electrostatically protected, and fashioned to optimize the intensity of stimulated light. 9 Cont’d PSP particles are either: - randomly positioned throughout a binder (3–10 μm), or - grown as linear filaments that enhance the absorption of x-rays and limit the spread of stimulated emission. 10 Imaging Plate The PSP screen is housed in a rugged cassette that appears similar to a screen-film cassette. In this form as an image receptor, the PSP screen- cassette is called an imaging plate (IP). The IP has lead backing that reduces backscatter x-rays. - This improves the contrast resolution of the image receptor. 11 Light Stimulation–Emission Light is emitted when an PSP crystal is illuminated. The sequence of events engaged in producing a PSL signal include: 1. Exposure. 2. Stimulation. 3. Reading. 4. Erasing. 12 1. Exposure When an x-ray beam exposes a PSP, the energy transfer results in excitation of electrons into a metastable state. Approximately 50% of these electrons return to their ground state immediately, resulting in prompt emission of light. The remaining metastable electrons return to the ground state over time. - This causes the latent image to fade. - Requires that the IP must be read soon after exposure. - CR signal loss is objectionable after approximately 8 hours. 13 2. Stimulation A finely focused beam (50 to 100 μm) of infrared light (laser) is directed at the PSP. - The diameter of the laser beam determines the spatial resolution of the CR imaging system. 
As laser beam intensity increases, so does the intensity of the emitted signal. Note that as the laser beam penetrates, it spreads. - The amount of spread increases with PSP thickness. 14 3. Reading The laser beam causes metastable electrons to return to the ground state with the emission of a shorter wavelength light in the blue region of the visible spectrum. Through this process, the latent image is made visible. Some signal is lost as the result of: 1. Scattering of the emitted light. 2. The collection efficiency of the photodetector. Photodiodes (PDs) are the light detectors of choice for CR. 15 4. Erasing The stimulation cycle does not completely transition all metastable electrons to the ground state. If residual latent image remained, ghosting could appear on subsequent use of the IP. Any residual latent image is removed by flooding the phosphor with very intense white light from a bank of specially designed fluorescent lamps. Optical filters are necessary to allow only emitted light to reach the photodetector while blocking the intense stimulating light. 16 The Computed Radiography Reader Commercial CR readers. 17 Mechanical Features When the CR cassette is inserted into the CR reader, the IP is removed and is fitted to a precision drive mechanism. - This drive mechanism moves the IP constantly yet slowly (“slow scan”) along the long axis of the IP. - Small fluctuations in velocity can result in banding artifacts. 18 Banding artifact Mechanical Features While the IP is being transported in the slow scan direction, a deflection device (a rotating polygon or an oscillating mirror) deflects the laser beam back and forth across the IP. - This is the fast scan mode. 19 Optical Features Components of the optical subsystem include: 1. Laser. 2. Beam-shaping optics. 3. Light-collecting optics. 4. Optical filters. 5. Photodetector. 20 Optical Features The laser is the source of stimulating light. The laser spreads as it travels to the deflection device. Focused by a lens system. 21 Cont’d As the laser beam is deflected across the IP, it changes size and shape. Special beam-shaping optics keeps the beam size, shape, speed, and intensity constant. Emitted light from the IP is channeled into a funnel-like fiber-optic collection assembly and is directed at the photodetector. 22 Cont’d Before photodetection occurs, the light is filtered so that none of the long-wavelength stimulation light reaches the photodetector. - Improves the signal-to-noise ratio. 23 Computer Control The output of the photodetector is a time-varying analog signal. The signal is transmitted to a computer system. The analog signal is processed for amplitude, scale, and compression. - This shapes the signal. Then the analog signal is digitized. - Sampling. - Quantization. An image buffer (hard disc) stores the completed image temporarily until it is transferred to a workstation or to an archival computer. 24 Imaging Characteristics The four principal characteristics of any medical image are: 1. Spatial resolution. 2. Contrast resolution. 3. Noise. 4. Artifacts. These are different for all DR, including CR from screen-film imaging (discussed in greater depth later). Other imaging characteristics include: 5. Image receptor response function. 6. Image noise. 25 Image Receptor Response Function Characteristic curve in S/F = Image receptor response function in DR and CR. Radiographic technique is so critical in S/F imaging. - The response of S/F extends through an optical density (OD) range from 0 to 3 (i.e., 1000 gray levels). 
- However, the S/F image can display only approximately 30 shades of gray on a viewbox. - Most S/F imaging techniques aim for radiation exposure on the toe side of the characteristic curve. 26 Cont’d CR imaging is characterized by extremely wide latitude. Five decades of radiation exposure results in almost 100,000 gray levels. - Each gray level can be evaluated visually by postprocessing. - A 14-bit CR image has 16,384 gray levels. 27 Cont’d Proper radiographic technique and exposure are essential for S/F radiography. - Overexposure and underexposure. With CR, radiographic technique is not as critical because contrast does not change over five decades of radiation exposure. The conventional approach that “kVp controls contrast” and “mAs controls OD” does not hold for CR. 28 Image Noise 29 Cont’d Fortunately, CR noise sources are bothersome only at very low image receptor radiation exposure. Newer CR systems have lower noise levels and therefore additional patient radiation dose reduction is possible. 30 Patient Characteristics Include: A. Patient radiation dose B. Workload 31 Patient Radiation Dose At low radiation exposure CR is a faster image receptor than S/F system. - i.e., lower patient radiation dose should be possible with CR. - However, noise is an issue at lower radiographic technique (discussed later). Because CR image contrast is constant regardless of radiation exposure, images can be made at higher kVp and lower mAs, resulting in additional reduction in patient radiation dose. 32 Cont’d A useful rule of thumb is that current “average” S/F exposure factors represent the absolute maximum factors for the body part in CR. 33 Workload The transition from S/F radiography to CR brings several significant changes. 1. Wide exposure latitude Fewer repeat examinations. 2. Improved contrast resolution. 3. Reduced patient radiation dose. 4. No need to reload the cassette lower workload. 34 Summary The first applications of DR appeared in the early 1980s as CR. CR is based on the phenomenon of PSL. X-rays interact with an SPS and form a latent image by exciting electrons to a higher energy metastable state. In the CR reader the latent image is made visible by releasing the metastable electrons with a stimulating laser light beam. On returning to the ground state, electrons emit shorter wavelength light in proportion to the intensity of the x-ray beam. The emitted light signal is digitized and reconstructed into a medical image. The value of each CR pixel describes a linear characteristic curve over five decades of radiation exposure and a 100,000 grayscale. This wide latitude can result in reduced patient radiation dose and improved contrast resolution. 35 Abbreviations 36 Thank You 800 MyHCT (800 69428) www.hct.ac.ae The Campus of Tomorrow Medical Imaging Science Medical Imaging Technology II HMI 2102 3. CLO1 (Part III): Digital Radiography Wednesday, January 24, 2024 Objectives At the completion of this lecture, you should be able to do the following: 1. Identify five digital radiographic modes in addition to computed radiography. 2. Define the difference between direct digital radiography and indirect digital radiography. 3. Describe the capture, coupling, and collection stages of each type of digital radiographic imaging system. 4. Discuss the use of silicon, selenium, cesium iodide, and gadolinium oxysulfide in digital radiography. 
2 Outline Scanned Projection Radiography Charge-Coupled Device Cesium Iodide/Charge-Coupled Device Cesium Iodide/Amorphous Silicon Amorphous Selenium 3 Introduction The acceleration toward all-digital imaging continues because it provides solutions for several significant disadvantages of S/F radiography: Requires chemical processing (time, space, and cost). Virtually static (images cannot be enhanced). Hard copies require storage rooms and transportation (more personnel). Can be viewed only in a single place at one time. Digital radiography is more efficient in time, space, and personnel than S/F radiography. 4 The Organizational Scheme for DR The vocabulary applied to digital radiography is not yet standard or universally accepted. 5 Cont’d 1. Capture element - The element in which the x-ray is captured. - In CR: the PSP. - In the other DR modes: sodium iodide (NaI), cesium iodide (CsI), gadolinium oxysulfide (GdOS), or amorphous selenium (a-Se). 2. Coupling element - The element that transfers the x-ray–generated signal to the collection element. - A lens or fiber-optic assembly, a contact layer, or a-Se. 3. Collection element - The element that collects the electronic signal. - A photodiode, a charge-coupled device (CCD), or a thin-film transistor (TFT). 6 Scanned Projection Radiography (SPR) A finely collimated beam (fan beam) is scanned across the anatomy. Used in all CT machines to facilitate patient positioning and define the imaging volume. CT vendors give this process various trademarked names (scout, scanogram, topogram, localizer, surview, or pilot scan). 7 Cont’d During the 1980s and the early 1990s, SPR was developed for dedicated chest DR. The principal advantage of SPR was collimation to a fan x-ray beam, which rejects scatter radiation and improves image contrast. - Prepatient and postpatient collimators. X-rays are collimated to the detector array: a scintillation phosphor (NaI or CsI). The detector array is married to a linear array of CCDs through a fiber-optic light path. This development was not very successful. - Chest anatomy has high subject contrast. - The scanning motion required several seconds, resulting in motion blur. 8 Charge-coupled Device (CCD) Was developed in the 1970s as a highly light-sensitive device for military use. Has major applications in astronomy and digital photography. Silicon-based semiconductor. Has three principal advantageous imaging characteristics: 1. Sensitivity. 2. Dynamic range. 3. Size. 9 Cont’d 1. Sensitivity - The ability to detect and respond to very low levels of visible light. - This is important for photographing in low light and radiographing at low dose. 2. Dynamic range - The ability to respond to a wide range of light intensity, from very dim to very bright. - The CCD has higher sensitivity for radiation and a much wider dynamic range than S/F receptors. 3. Size - A CCD is very small, making it highly adaptable to DR in its various forms. - Measures ~1–2 cm. 10 Cesium Iodide (CsI)/CCD Uses tiled CCDs receiving light from a scintillator (CsI phosphor). - Allows the use of an area x-ray beam (in contrast to SPR, exposure time is short). CsI has high photoelectric capture because the atomic number of cesium is 55 and that of iodine is 53. - X-ray interaction with CsI is high, resulting in low patient radiation dose. 11 Cont’d The scintillation light is efficiently transmitted through fiber-optic bundles to the CCD array. - High x-ray capture efficiency. - Good spatial resolution (up to 5 lp/mm). CsI/CCD is an indirect DR process. - X-rays are converted first to light and then to an electronic signal.
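Because the same capture/coupling/collection vocabulary from the organizational-scheme slides recurs for every DR mode in this lecture, a small Python lookup table can keep it straight. This is only an illustrative summary drawn from the slides; the CsI/a-Si and a-Se entries anticipate the slides that follow:

```python
# Capture, coupling, and collection elements for each DR mode, as listed in the slides.
DR_MODES = {
    "CR":       {"capture": "photostimulable phosphor (PSP)",
                 "coupling": "fiber-optic light-collection assembly",
                 "collection": "photodiode"},
    "SPR":      {"capture": "NaI or CsI scintillator",
                 "coupling": "fiber-optic light path",
                 "collection": "linear CCD array"},
    "CsI/CCD":  {"capture": "CsI scintillator",
                 "coupling": "fiber-optic bundles",
                 "collection": "tiled CCDs"},
    "CsI/a-Si": {"capture": "CsI (or GdOS)",
                 "coupling": "amorphous silicon (a-Si)",
                 "collection": "TFT active matrix array"},
    "a-Se":     {"capture": "amorphous selenium",
                 "coupling": "amorphous selenium",
                 "collection": "TFT"},
}

for mode, e in DR_MODES.items():
    print(f"{mode:8s} capture: {e['capture']}; coupling: {e['coupling']}; collection: {e['collection']}")
```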
12 CsI/Amorphous Silicon CsI/a-Si is an indirect DR process. Capture element: CsI (or GdOS). Coupling element: amorphous silicon (a-Si). - Silicon is a semiconductor. - a-Si is not crystalline but is a fluid. Collection element: thin-film transistor (TFT). - A TFT is a device used to switch electronic signals. The image receptor is fabricated into individual pixels [known as an Active Matrix Array (AMA)]. - Each pixel has a light-sensitive face of a-Si with a capacitor and a TFT embedded. 13 The Fill Factor Dilemma! A large portion of the face of the pixel is covered by electronic components (conductors, capacitors, and the TFT) and wires that are not sensitive to the light emitted by the CsI phosphor. - Fill factor: the percentage of the pixel face that is sensitive to x-rays. - Fill factor ~80%. ↓ pixel size → ↑ spatial resolution. ↓ pixel size → ↓ fill factor → ↑ exposure → ↑ patient radiation dose. Nanotechnology promises increased fill factor and improved spatial resolution at even lower patient radiation doses. 14 Amorphous Selenium (a-Se) A direct DR process by which x-rays are converted to an electronic signal. Capture and coupling elements: a-Se. - ~200 μm thick. - Sandwiched between charged electrodes. Collection element: TFT. An x-ray interacting with a-Se produces a charge pair. The created charge is collected by a storage capacitor. The charge remains in the capacitor until the signal is read by the switching action of the TFT. 15 Several additional steps are eliminated when progressing from S/F radiography through CR to DR. 16 Useful Videos 1. Digital Radiography System Explained (step-by-step) (5:33): https://www.youtube.com/watch?v=YzV1kovMjkI&t=5s 2. DR Digital Radiography System (4:28): https://www.youtube.com/watch?v=JSZI4U6OBvY 17 Summary Now we are in the midst of a rapid transfer of technology to DR. SPR provides the advantage of scatter radiation reduction caused by x-ray beam collimation. CsI or GdOS scintillation phosphor can be used as the capture element for x-rays in indirect DR methods. This signal is: - channeled to a CCD through fiber-optic channels, or - conducted to an AMA of TFTs, whose sensitive element is a-Si. a-Se is used as the capture element for x-rays in the direct DR method. Spatial resolution is limited by pixel size in DR. DR prevails in contrast resolution. 18 Thank You 800 MyHCT (800 69428) www.hct.ac.ae The Campus of Tomorrow Medical Imaging Science Medical Imaging Technology II HMI 2102 4. CLO1 (Part IV): Digital Radiographic Technique Monday, January 29, 2024 Objectives At the completion of this lecture, you should be able to do the following: 1. Distinguish between spatial resolution and contrast resolution. 2. Identify the use and units of spatial frequency. 3. Interpret a modulation transfer function curve. 4. Discuss how postprocessing allows the visualization of a wide dynamic range. 5. Describe the features of a contrast-detail curve. 6. Discuss the characteristics of digital imaging that should result in lower patient radiation doses. 2 Outline Spatial Resolution - Spatial Frequency - Modulation Transfer Function Contrast Resolution - Dynamic Range - Postprocessing Contrast-Detail Curve Patient Radiation Dose Considerations - Image Receptor Response - Detective Quantum Efficiency 3 Introduction Digital radiographic technique is similar to S/F radiography except that kVp as a control of image contrast is not so important. Proper digital radiographic technique should result in reduced patient radiation dose.
The four principal characteristics of any medical image are: 1. Spatial resolution. 2. Contrast resolution. 3. Noise. 4. Artifacts. 4 Spatial Resolution Definition: the ability of an imaging system to resolve and render on the image a small high-contrast object. Most people can see objects as small as 200 μm. - The spatial resolution of the eye is described as 200 μm. If the dots were not high contrast, the spatial resolution of the eye would require larger dots. In medical imaging, spatial resolution is described by the quantity “spatial frequency”. 5 Spatial Frequency The fundamental concept of spatial frequency refers not to size but to the line pair. A line pair is a black line on a light background. One line pair consists of the line and an interspace of the same width as the line. Six line-pair patterns are shown. 6 Cont’d Spatial frequency is expressed in line pairs per millimeter (lp/mm). As the spatial frequency becomes larger, the objects become smaller. Higher spatial frequency indicates better spatial resolution. “The smallest resolvable object size = 1 / (Spatial Frequency × 2)” “Spatial Frequency = 1 / (The smallest resolvable object size × 2)” 7 Quiz Q: A digital radiographic imaging system has a spatial resolution of 3.5 lp/mm. How small an object can it resolve? A: 3.5 lp/mm = 7 objects in 1 mm, or 7/mm. The reciprocal is the answer: 1/7 mm = 0.143 mm = 143 μm. 8 Quiz Q: A S/F mammography imaging system operating in the magnification mode can image high-contrast microcalcifications as small as 50 μm. What spatial frequency does this represent? A: It takes two 50-μm objects to form a single line pair. 1 lp = 100 μm, or 1 lp/100 μm = 1 lp/0.1 mm = 10 lp/mm. 9 Quiz Q: The image from a nuclear medicine gamma camera can resolve just 1/4 inch. What spatial frequency does this represent? A: 1/4 in × 25.4 = 6.35 mm. It takes two 6.35-mm objects to form a line pair, hence 12.7 mm/lp. The reciprocal is 1 lp/12.7 mm = 0.08 lp/mm = 0.8 lp/cm. 10 Approximate Spatial Resolution for Various Medical Imaging Systems Sometimes the spatial resolution for NM, CT, and MRI is stated in terms of lp/cm instead of lp/mm. 11 Anatomy can be described as having spatial frequency Large soft tissues (e.g., liver, kidneys, and brain) have low spatial frequency → easy to image. Bone trabeculae, breast microcalcifications, and contrast-filled vessels are high-spatial-frequency objects → more difficult to image. 12 Spatial Resolution Determinants in Projection Radiography 1. The geometry of the system, especially focal-spot size. - Mammography has the best spatial resolution because of its small focal spot (0.1 mm). 2. Pixel size. - No digital imaging system can image an object smaller than 1 pixel. 13 Quiz Q: What is the spatial resolution of a 512 × 512 CT image that has a field of view of 30 cm? What spatial frequency does that represent? A: 512 pixels/30 cm = 512 pixels/300 mm; 300 mm/512 pixels = 0.59 mm/pixel. Two pixels are required to form a line pair: 2 × 0.59 mm = 1.2 mm/lp. 1 lp/1.2 mm = 0.83 lp/mm = 8.3 lp/cm. This CT imaging system is limited to a spatial resolution of 0.59 mm, or 0.83 lp/mm. 14 Modulation Transfer Function (MTF) MTF is a description of the ability of an imaging system to faithfully render objects of different sizes onto an image. It can be viewed as the ratio of image to object as a function of spatial frequency. The ideal imaging system is one that produces an image that appears exactly as the object. - MTF = 1. - Such a system does not exist.
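The lp/mm arithmetic used in the quizzes above can be collected into a few helper functions. A minimal Python sketch (function names are illustrative; the printed values reproduce the worked answers):

```python
def smallest_object_mm(sf_lp_per_mm):
    """Smallest resolvable object size (mm) = 1 / (2 x spatial frequency)."""
    return 1 / (2 * sf_lp_per_mm)

def spatial_frequency_lp_per_mm(object_size_mm):
    """Spatial frequency (lp/mm) = 1 / (2 x smallest resolvable object size)."""
    return 1 / (2 * object_size_mm)

def pixel_limited_resolution(matrix_size, fov_mm):
    """Pixel size (mm) and pixel-limited spatial frequency (lp/mm) from matrix size and FOV."""
    pixel_mm = fov_mm / matrix_size            # e.g. 300 mm / 512 pixels
    return pixel_mm, spatial_frequency_lp_per_mm(pixel_mm)

print(round(smallest_object_mm(3.5), 3))       # 0.143 mm (143 um) for 3.5 lp/mm
print(spatial_frequency_lp_per_mm(0.05))       # 10.0 lp/mm for 50-um microcalcifications
print(pixel_limited_resolution(512, 300))      # (0.586 mm/pixel, 0.853 lp/mm); the quiz rounds to 0.59 mm and 0.83 lp/mm
```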
Objects with high spatial frequency are more difficult to image than those with low spatial frequency. 15 Cont’d The line pairs become more blurred with increasing spatial frequency. The amount of blurring can be represented by the reduced amplitude of the representative signal frequency. Blurring represents deterioration in image contrast. - Larger objects can be presented with higher contrast than smaller objects. 16 Cont’d (MTF illustration) 17 https://www.edmundoptics.com/knowledge-center/application-notes/optics/introduction-to-modulation-transfer-function/ Cont’d Imaging system spatial resolution is the spatial frequency at 10% MTF. - At high spatial frequencies, contrast is lost; this limits the spatial resolution of the imaging system. The MTF curve in Figure 17-7 is representative of S/F radiography. - At low spatial frequencies (large objects), good reproduction is noted on the image. - As the spatial frequency of the object increases (the objects get smaller), the faithful reproduction of the object on the image gets worse. - This MTF curve shows a limiting spatial resolution of ~8 lp/mm. 18 Example: S/F Radiography vs. S/F Mammography The single screen and smaller focal spot result in better MTF with mammography. 19 No DR imaging system can resolve an object smaller than the pixel size The MTF curve that represents DR has the distinctive feature of a cutoff spatial frequency. DR has higher MTF at low spatial frequencies. - This is principally because of the expanded dynamic range of DR and its higher detective quantum efficiency (DQE) (both discussed later). [Figure: DR MTF curve with a cutoff at 10 lp/mm, i.e., a pixel size of 1/(2 × SF) = 0.05 mm.] 20 Quiz Q: Figure 17-10 indicates a cutoff spatial frequency of 4 lp/mm for DR. What is the pixel size? A: 4 lp/mm = 8 objects/mm = 8 pixels/mm. Pixel size is 1/8 mm = 0.125 mm = 125 μm. 21 Contrast Resolution Definition: the ability to distinguish many shades of gray from black to white. All digital imaging systems have better contrast resolution than S/F imaging. The principal descriptor for contrast resolution is grayscale, also called dynamic range. 22 Dynamic Range Definition: the number of gray shades that an imaging system can reproduce. The dynamic range of a S/F radiograph is essentially three orders of magnitude. - i.e., 10 × 10 × 10 = 1,000 dynamic range. - OD: 0–3. - But the viewer can visualize only about 30 shades of gray. The dynamic range of digital imaging systems is identified by the bit capacity (depth) of each pixel. 23 Dynamic Range of Digital Medical Imaging Systems 24 Cont’d Over the range of exposures used for S/F imaging, the response of a digital imaging system is four to five orders of magnitude. With the postprocessing exercise of window and level, each gray level can be visualized, not just 30 or so. 25 Postprocessing With S/F radiographic images, one cannot extract more information than is visible on the image. A principal advantage of digital imaging is the ability to postprocess the image for the purpose of extracting even more information. For most people, approximately 30 gray levels is about the limit of contrast resolution. Window and level make possible visualization of the entire dynamic range of the grayscale. 26 Cont’d With the window and level postprocessing tool, any region of this 16,384-level grayscale can be expanded into a white-to-black grayscale. Especially helpful when soft tissue images are evaluated. 27
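Window and level can be expressed compactly in code. A minimal NumPy sketch of the idea; the window width and level values below are illustrative, not taken from the slides:

```python
import numpy as np

def window_level(pixels, level, width):
    """Map the grayscale range [level - width/2, level + width/2] onto 0-255 for display.

    Values below the window display as black and values above as white, so any slice
    of a 14- or 16-bit dynamic range can be expanded into a full white-to-black scale.
    """
    low, high = level - width / 2, level + width / 2
    clipped = np.clip(pixels, low, high)
    return ((clipped - low) / (high - low) * 255).astype(np.uint8)

# Illustrative 14-bit pixel values (0-16383): expand a narrow soft-tissue band for viewing.
raw = np.array([1200, 8000, 8200, 8400, 16000])
print(window_level(raw, level=8200, width=800))   # [0 63 127 191 255]
```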
Cont’d Ref: Digital versus screen film mammography: a clinical comparison, 2008. For the younger, denser breasts, DM is better than S/F mammography. For older, less dense breasts, DM is equal to S/F mammography. 28 Activity Copy and paste the following link into your internet browser and explore windowing: http://www.dicomlibrary.com/?study=1.2.826.0.1.3680043.8.1055.1.20111102150758591.92402465.76095170 29 Contrast-detail Curve Another method for evaluating the spatial resolution and contrast resolution of an imaging system. Quality control test tools can be used to construct the contrast-detail curve for any imaging system. - They have rows of holes of varying sizes. - Each row is associated with a column of holes of the same size that are drilled to a different depth. - Fashioned into a plastic or aluminum sheet. 30 Cont’d A contrast-detail curve is a plot of the smallest perceptible object size as a function of object contrast. The Gammex 1151 Contrast/Detail Phantom 31 Ref: Image analysis methods for diffuse optical tomography, 2006 Cont’d When object contrast is high, small objects can be imaged. When object contrast is low, large objects are required for visualization on an image. The left side of the curve, which relates to high-contrast objects, is limited by the MTF (spatial resolution) of the imaging system. The right side of the curve, which relates to low-contrast objects, is noise limited. - Noise reduces contrast resolution. 32 Cont’d 33 http://www.impactscan.org/slides/impactcourse/noise_and_low_contrast_resolution/img63.html Improving the Contrast-detail Curve Systems with smaller pixel size will have better spatial resolution, but the contrast resolution of both will be the same. If the imaging technique (mAs) is increased, spatial resolution remains the same, but contrast resolution is improved (↓ noise). [Figure: contrast-detail curves for 1.2-mm and 0.4-mm pixels.] 34 Contrast-detail Curves for Various Medical Imaging Systems 35 Patient Radiation Dose Considerations With digital imaging, patient doses can be reduced by 20%–50%. However, “dose creep” has occurred. Patient radiation dose reduction should be possible because of: 1. The manner in which the digital image receptor responds to x-rays. 2. A property of the digital image receptor known as DQE. 36 Image Receptor Response The curves in the figure relate to contrast resolution; they do not represent spatial resolution. - Spatial resolution in S/F radiography is determined principally by focal-spot size. - Spatial resolution in DR is determined by pixel size. When S/F is overexposed or underexposed, image contrast is reduced. - The exposure factor-related repeat rate for S/F is ~5%. DR cannot be overexposed or underexposed. - It should never require repeating because of exposure factors. 37 Cont’d (Figure: S/F vs. DR image receptor response.) 38 Cont’d For S/F imaging, kVp controls contrast, and mAs controls OD. kVp is less important for DR contrast. - kVp should be increased → ↓ absorbed dose. - mAs should be decreased → ↓ absorbed dose. - Result: adequate contrast resolution + constant spatial resolution + reduced patient radiation dose. The patient radiation dose reduction that is possible in DR is limited by noise (↓ SNR → ↓ contrast). 39 Detective Quantum Efficiency DQE is the absorption coefficient of an image receptor. Patient dose in DR should be low because of the high DQE of the image receptor. DQE is determined by the thickness of the capture layer, its atomic composition, and the x-ray energy. 40 Cont’d Fewer x-rays (lower patient radiation dose) are required by higher-DQE receptors to produce an image. DQE for DR is higher than that for S/F. 41
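A rough sketch of why a higher-DQE receptor permits a lower dose: DQE relates output SNR² to input SNR², and input SNR² is proportional to the number of incident x-rays (hence dose), so the dose needed to reach a target image SNR scales roughly as 1/DQE. The DQE values below are illustrative only, not quoted from the slides:

```python
def relative_dose_for_equal_snr(dqe_new, dqe_ref):
    """Relative dose needed by the 'new' receptor to match the reference receptor's SNR.

    Assumes output SNR^2 is proportional to DQE x incident exposure, so for equal
    image SNR: dose_new / dose_ref = dqe_ref / dqe_new.
    """
    return dqe_ref / dqe_new

# Illustrative numbers only: a receptor with twice the DQE of a S/F receptor
# would need roughly half the exposure for the same image SNR.
print(relative_dose_for_equal_snr(dqe_new=0.50, dqe_ref=0.25))   # 0.5
```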
Cont’d Most x-rays have energy that matches the K-shell binding energy. - This relates to greater x-ray absorption at that energy. When the x-ray beam interacts with the patient, most of the x-rays are scattered → reduced in energy → greater absorption. 42 Summary The DR image is limited by one deficiency when compared with S/F radiography: spatial resolution. Spatial resolution, the ability to image small high-contrast objects, is limited by pixel size in DR. Digital images are obtained faster than S/F images because wet chemistry processing is unnecessary. Digital images can be viewed simultaneously by multiple observers in multiple locations. Digital images can be transferred and archived electronically, thereby saving image retrieval time and film file storage space. Digital images have a wider dynamic range, resulting in better contrast resolution. With windowing, thousands of gray levels can be visualized, allowing extraction of more information from each image. Perhaps the principal favorable characteristic of digital imaging is the opportunity for patient radiation dose reduction. This occurs because of the linear manner in which the image receptor responds to x-rays and because of the greater DQE of the digital image receptor. 43 Thank You 800 MyHCT (800 69428) www.hct.ac.ae