Summary

These lecture notes cover digital detectors, digital images, and pre- and post-processing tools in digital radiography. They discuss indirect and direct read systems and flat panel detectors, and explain image processing techniques and algorithms for edge enhancement and smoothing.

Full Transcript

Week 10 Lecture 1 – November 13th, 2018

• Digital detectors
• Digital images
• Detectors
• Pre- and postprocessing tools

Objectives
• Convolution
• Kernel
• Edge enhancement
• Smoothing
• WW
• WL
• Post-processing

Digital Radiography
• Scan projection radiography (CT)

Overview of Direct Read System and Indirect Read System

Indirect Read System
• X-ray photons (analog) → light → electrons (digital)

Direct Read System
• X-ray photons → electrons
• Can use amorphous silicon or selenium

Generic Design of Flat Panel Detectors
• Amorphous selenium (Z = 34) is used in direct read systems
  o More popular because of its atomic number
• Amorphous silicon (Z = 14) is deposited in multiple layers on a glass substrate in indirect read systems
• Conducting paths are embedded within the layers

Indirect Flat Panel Detectors
• A scintillating (x-ray to light) material is layered on the front surface of the flat panel detector
• Light produced from the screen strikes the flat panel detector
• Much of the light produced in the scintillating material has to travel a relatively large distance
• This causes more blurring
• To improve this, flat panel detectors may use CsI grown in "needles"

Cesium Iodide Needle Crystals
• A thick CsI layer has high QDE
• Needles act as light pipes, channeling light toward the photodiode
• Minimal light spreading within the phosphor

CsI/a-Si
• CsI: cesium iodide (Z = 55 and 53)
• a-Si: amorphous silicon
• High photoelectric capture
  o This creates better spatial resolution

Digital Images

Indirect Detector Construction
• This is for indirect systems; the CCL collects the light

Detector
• Detector elements may be called dexels
• Each pixel has to have an electronic area, a light-sensitive area, and a detector area
• The larger the light-sensitive area, the more useful the pixel

Fill Factor
• High fill factor → the areas taken up by other things (storage or electronics) are smaller
• We want a high fill factor
• We want a large proportion of the pixel to be sensitive to the incoming signal → higher fill factor, better spatial resolution
• 100% fill factor = the entire surface area is sensitive to the incoming signal → less dose to the patient

Digital Matrix
• A voxel adds the depth; a pixel is the picture element
• Once the signal is captured, each pixel is assigned a number, which is translated to a gray-scale value; the number of available shades is the bit depth
• A dexel is a grouping of pixels within the detector
• A 10x10 matrix contains 100 pixels

Digital
• Although the image receptors in DR are referred to as digital, the initial stages of these devices produce an analog signal
• The analog electronic signal from the PMT is digitized (sampling and quantization)
• The signal from each pixel is an analog packet of electronic charge, which is digitized by an ADC in the computer (a short quantization sketch appears at the end of this section)

Image Processing
• The matrix of values may be manipulated to emphasize the visibility of certain components more than others
• Some basic postprocessing options
  o Edge enhancement: exaggerates displayed contrast where there are abrupt signal changes from one pixel to the next
    ▪ Makes the image higher contrast (black and white)
    ▪ Used more on the scaphoid and toes
  o Effective for fractures and small high-contrast tissues
  o When in the lab, double check whether edge enhancement is the default for hand images
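
Relating back to the Digital bullets above, the sampling-and-quantization (ADC) step can be illustrated with a short sketch. This is only an illustration of the idea, not any detector's firmware; the 12-bit depth and the 0–1 signal range are assumed values chosen for the example.

```python
import numpy as np

def digitize(analog_signal, bit_depth=12, max_signal=1.0):
    """Quantize sampled analog pixel charges into 2**bit_depth gray levels.

    A simplified stand-in for the ADC: each analog packet of charge is
    mapped to an integer pixel value, later displayed as a shade of gray
    (the bit depth sets how many shades are available).
    """
    levels = 2 ** bit_depth                          # e.g. 12 bits -> 4096 gray levels
    scaled = np.clip(analog_signal / max_signal, 0.0, 1.0)
    return np.round(scaled * (levels - 1)).astype(np.uint16)

# A 3x3 patch of sampled charges (arbitrary units between 0 and 1)
patch = np.array([[0.10, 0.12, 0.11],
                  [0.13, 0.50, 0.14],
                  [0.12, 0.11, 0.10]])
print(digitize(patch))   # integer pixel values on a 0..4095 scale
```
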
Postprocessing
• Edge enhancement accentuates edges in the image; the default edge enhancement values should seldom need to be changed
• The default provides the best image possible, but the radiologist might change it when reading if looking for a specific pathology, so the tech would send the hand image in the lab with the edge enhancement default in place
• Helpful for a scaphoid fracture, because the scaphoid has a very unique blood supply; fractures around the waist might not show up and can cause death of the bone (avascular necrosis)

Edge Enhancement
• Creates abrupt edges
• Enhances the contrast
• Makes structures stand out and appear with higher density
• Boundaries look more distinct → disruptions in cortical margins are easier to see

Edge Enhancement
• Also called "high pass filtering"
  o Lets high spatial frequency content pass through the filter to the processed matrix
• Alters the magnitude of the signal difference between the two sides of an edge
  o Increases the high side
  o Decreases the low side
• Emphasizes high spatial frequencies
• Makes edges look bigger
• Allows the high-frequency signal to go through
  o Deletes all but the high frequencies
  o More gets through, but this also creates more noise
• It amplifies the signal difference (good and bad)

Postprocessing – Smoothing
• Smoothing: reduces displayed contrast where there are abrupt signal changes from one pixel to the next
• Low pass filtering occurs in smoothing
  o Deletes all but the low-frequency content
• The visibility of quantum mottle can be reduced by a spatial filtering operation called SMOOTHING
• Reduces contrast (more gray)
• It takes away noise and good signal alike

Smoothing
• In the slide comparison, the left image is more pixelated
• Smoothing makes an image that is slightly underexposed more acceptable

Postprocessing
• Smoothing and edge enhancement are both part of convolution
• Convolution: the science of manipulating digital images using mathematical operations called convolution
  o Twist, coil, and distort
• Using kernels, it involves shifting and adding gray scale values
• Pixel values get their instructions (mathematical orders) from kernels in a postprocessing algorithm; edge enhancement has specific instructions, and kernels are the instructions for how the pixel values will be manipulated in postprocessing

Colonel (Sanders) Is Not the Same as Kernel
• Postprocessing involves convolution
• Kernels are applied globally or in point situations to give orders to the pixels to convolve (change mathematically)
• Convolution through kernels applied globally happens to the whole image
• Applied specifically to certain areas, these are point changes
  o Low pass filtering
    ▪ Smoothing
  o High pass filtering
    ▪ Edge enhancement

Postprocessing
• Kernels give instructions for the pixels to be convolved with
  o Edge enhancement
  o Smoothing
  o EVP
  o Harmonization
    ▪ An image that has been smoothed with a blurring kernel can be subtracted from the original image to produce a harmonized image
  o All of these are NOT global; they are point processing, specific to a certain part of the image
• In most spatial smoothing algorithms, each pixel value in the smoothed image is obtained by a weighted averaging of the corresponding pixel in the unprocessed image with its neighbors (a sketch of this weighted averaging, and of harmonization, follows below)
• Although smoothing reduces quantum mottle, it also blurs the image
• Images must not be smoothed to the extent that clinically significant detail is lost
• Look at the EI value to see if the image is underexposed → if so you may have to smooth
  o Smoothing will lower the noise → good
  o But it will also lower the good signal
  o It could deteriorate the image enough to make it non-diagnostic
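
The weighted-average smoothing and the harmonization (blur-and-subtract) steps described above can be sketched as follows. This is a minimal illustration assuming NumPy/SciPy and a simple 3x3 averaging kernel; the function names and the weight parameter are illustrative, not a vendor's actual postprocessing algorithm.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 smoothing (low-pass) kernel: each output pixel becomes a weighted
# average of the corresponding input pixel and its eight neighbours.
smooth_kernel = np.full((3, 3), 1.0 / 9.0)

def smooth(image):
    """Low-pass filtering: reduces pixel-to-pixel differences (and noise)."""
    return convolve(image.astype(float), smooth_kernel, mode="reflect")

def harmonize(image, weight=1.0):
    """Harmonization as described above: subtract a blurred copy of the
    image from the original, suppressing slowly varying background while
    keeping fine detail."""
    return image.astype(float) - weight * smooth(image)

# Tiny example matrix: one bright pixel on a flat background
img = np.zeros((5, 5))
img[2, 2] = 100.0
print(smooth(img))     # the spike is spread over its neighbours (blurring)
print(harmonize(img))  # the flat background stays near zero; the detail stands out
```
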
Image Is Too Noisy? (Pixelated)
Possible reasons and remedies:
1. The image may require more exposure; check the EI
   a. This is the first thing to check
2. Cancel edge enhancement
   a. This smooths without actually smoothing
   b. Makes the image more gray scale
3. Turn off enhanced visualization processing (EVP)
   a. EVP increases latitude without reducing contrast
4. The last thing we would do is smoothing

Image Processing Basics
• Algorithm
  o A sequence of math and logic functions to achieve a specific goal
• Goal for image processing: improve the visibility of specific aspects of the image
• Examples
  o Increase the visibility of edges by increasing contrast
  o Counteract geometric unsharpness
  o Reduce the conspicuity of quantum noise

Smoothing
• Also called a "low pass filter"
  o Lets low spatial frequency content pass through the filter to the processed matrix
• Reduces the magnitude of differences from one pixel to the next
• Masking – the term applied to the filtering process to indicate which frequencies have been suppressed (the image is transformed into frequencies)
• Masking is done through filtration

Filtering
• By transforming the image into frequencies, it can be mathematically altered
• The computer either accentuates or suppresses selected frequencies during the filtering process
• E.g. smoothing – deletes all but the low frequencies
• Edge enhancement – deletes all but the high frequencies

Spatial Filtering
• Concept: alter each matrix value by performing a math function (applying a kernel) on it using the neighbouring pixels' values
• Kernel
  o Mathematical instructions
  o Indicates the set of values to be used in the calculation

Kernel for Edge Enhancement
• The center value of the mask is opposite in sign compared to the outside values (see the sketch after the next section)
• If a point and its neighbours are identical, the replacement value is zero (no edge)
• The greater the difference between the point and its neighbours, the greater the absolute value of the processed point
• The example image shows a middle pixel that is completely different from its neighbours
• If they were all similar, it would show smoothing

Limitations of Simple Kernels
• Simple edge enhancement increases noise as well as actual edges
• Smoothing reduces the visibility of edges as well as noise
• Most images need some degree of both enhancement and smoothing
• Note that the hand in the lab was defaulted to edge enhancement
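
As a sketch of the edge enhancement kernel described above (center value opposite in sign to its neighbours, zero output where the neighbourhood is uniform), the following assumes one common 3x3 high-pass kernel; the strength parameter and function names are illustrative, not a specific system's implementation.

```python
import numpy as np
from scipy.ndimage import convolve

# High-pass (edge enhancement) kernel: the centre weight has the opposite
# sign to the surrounding weights, and all the weights sum to zero.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

def edge_map(image):
    """If a pixel equals all of its neighbours, the result is zero (no edge);
    the bigger the difference, the larger the absolute output value."""
    return convolve(image.astype(float), edge_kernel, mode="reflect")

def edge_enhance(image, strength=0.5):
    """Add a fraction of the edge map back to the original: the high side of
    an edge is raised and the low side lowered, amplifying edges and noise."""
    return image.astype(float) + strength * edge_map(image)

flat = np.full((3, 3), 50.0)       # uniform region
print(edge_map(flat))              # all zeros: no edge detected

step = np.array([[10, 10, 90],     # abrupt signal change down the middle
                 [10, 10, 90],
                 [10, 10, 90]], dtype=float)
print(edge_map(step))              # large +/- values at the boundary
print(edge_enhance(step))          # high side increased, low side decreased
```
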
Windowing
• Concept: improve the visibility of structures in a specific subset of the full range of signal values by placing just those subset values in the gray scale; changed by +, −, ×, or ÷
• Analogy
  o Looking out a window allows us to see a part of the whole outside
  o The location of the open window affects which part we see (first floor vs. third floor)
  o The size of the opening of the window affects how much of the whole we see

Windowing
• Window level: would be bright at the top and very dark at the bottom
  o It is changed by + and −
• Window width is changed by × or ÷
  o Can show black, gray, or white

Window Width
• The range of signal values that will be displayed as shades of gray (instead of white or black)
• The wider (larger) the width, the wider the range of values displayed between black and white
• As WW increases, the displayed difference between one value and the next becomes smaller

Window Level (or Window Center)
• The location of the center of the window
• Indicates the signal value that is assigned medium gray (the middle of the gray scale)

Selection of WW and WL (WC)
• Need to consider what structures need to be seen and what their signal values are
• Select WL based on the medium signal of those structures
• Select WW based on the range of signals of those structures
• Anything below the bottom of the window is black
• Anything above the top of the window is white (a short sketch of this WW/WL mapping follows below)
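
To illustrate WW/WL selection, here is a minimal sketch of how a window level (center) and window width map stored signal values to display gray shades. The 8-bit (256-shade) display scale and the example signal values are assumptions for illustration only.

```python
import numpy as np

def apply_window(pixel_values, window_level, window_width, gray_levels=256):
    """Map signal values to display gray shades using WL (centre) and WW (range).

    Values below the bottom of the window become black, values above the
    top become white, and values inside the window are spread linearly
    across the gray scale.
    """
    values = np.asarray(pixel_values, dtype=float)
    low = window_level - window_width / 2.0    # bottom of the window -> black
    high = window_level + window_width / 2.0   # top of the window -> white
    scaled = (values - low) / (high - low)     # 0..1 inside the window
    return np.clip(np.round(scaled * (gray_levels - 1)), 0, gray_levels - 1).astype(np.uint8)

signal = np.array([100, 900, 1000, 1100, 1900])
# Narrow window centred on 1000: large displayed differences, extremes clipped
print(apply_window(signal, window_level=1000, window_width=400))
# Wider window: the same values map to closer gray shades (less displayed contrast)
print(apply_window(signal, window_level=1000, window_width=1600))
```

Note how, with the wider window, neighbouring signal values map to gray shades that are closer together, which matches the "as WW increases, the displayed difference between one value and the next becomes smaller" point above.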
