
M2-2 Technology CT Imaging.pdf



Module 2-2: X-ray Computed Tomography (CT) Imaging Technology

1 Overview of Technology

In this module, we seek to understand the technology generally labeled as x-ray computed tomography (CT) imaging, which is used to look inside a patient. The term "tomography" refers to any method that produces images of single tissue planes, image slices, of the patient. As shown in Figure 1, we use the core x-ray technology to look inside, to see slices of the structure and condition of organs and bones as part of a health assessment. To make the GBIF specific to CT imaging, we need to specify the sensor and processing elements. The remaining elements are very similar to standard photography in that the system captures a picture, which can be enhanced or examined, and then displayed to the user as an image. The unique aspect of CT imaging is that a series of high-energy x-rays is shone onto the target and the passage of the waves through the patient is measured. The figure below shows that x-rays have more energy than visible light or ultraviolet light, which accounts for their ability to penetrate solid materials (we will talk more about waves when we get to the details of the technology). Note that the same method was also called "computed axial tomography" (CAT) in the past, but the term CT is currently preferred.

Figure 1: X-ray imaging makes unique use of electromagnetic waves that are passed through or blocked by different materials in the body.

Computed tomography describes the means of moving the x-ray equipment to produce slices of the biology, compared to the planar projections of x-ray alone. The x-ray of the human femur in Figure 2 is a projection onto the plane underneath the patient (basically the table), while the part of the CT shown represents a slice of that bone perpendicular to the table, a cross-section of the bone. The CT can provide the cross-section details, while the x-ray simply adds together all the parts of the cross-section to make a flat image.
Version 10-1-2023

Figure 2: CT scan used to create a 3D-printed bone model.

2 Technology Overview

2.1 Reconstructing Object Images from Data

A significant limitation of an x-ray projection onto the plane below is that depth information is lost, as shown in Figure 3. Assuming parallel x-rays, three possible locations for the sphere are shown that produce the same projected circle on the plane below. Note that we have already seen that the beam of the x-ray is more cone-shaped and that the rays are not exactly parallel; however, we will think of them as parallel as we come to appreciate the CT technology. The x-ray path is only affected by the cross-section of the object from this view (a circle) and the density of the material. Multiple views of the same object will help compensate for this lost depth information.

Figure 3: Recall that an x-ray image is a planar projection and thus depth information is lost. We can't know which of the three sphere locations would produce the projection shown.

Computed tomography will use thin slices of the beam. In the current example, the CT slice only uses the thin slice of the planar projection outlined in red. As shown in Figure 4, the planar projection was a circle; here we only measure a thin slice of the circle, which is a thin rectangle with slightly curved ends.

Figure 4: CT uses thin slices of the planar projections. A thin slice of the projected circle is a thin rectangle.

Now, given the thin slice of the planar projection in Figure 4 above, what can we say about the object that we scanned? The only thing we can tell from the slice of the planar projection is that there is an object between the sensor and the beam source. The blue box in Figure 5 below shows ALL possible locations and shapes of an object that could create the x-ray slice. The blue box is called a "back-projection," as we are trying to use the information in the projection to recreate the subject.
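The depth ambiguity of Figure 3 is easy to demonstrate numerically. In this toy NumPy sketch (the grid size and object placement are assumed, not from the module), the same small object is placed at three different depths along the beam direction, and the parallel-ray projection, modeled as a simple column sum, comes out identical every time:

```python
import numpy as np

# Model of Figure 3: with parallel rays, the projection onto the plane below
# is the same no matter where the object sits along the beam direction.
def place_object(row):
    scene = np.zeros((20, 20))
    scene[row:row + 3, 8:11] = 1.0   # same 3x3 object, different depth
    return scene

# Column sums play the role of the planar projection on the table below.
p_top = place_object(2).sum(axis=0)
p_mid = place_object(9).sum(axis=0)
p_low = place_object(15).sum(axis=0)

print(np.array_equal(p_top, p_mid) and np.array_equal(p_mid, p_low))  # True
```

All three projections are identical, so a single view cannot tell the three locations apart.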
We don't know the shape of the object; it doesn't have to be a sphere, since a properly oriented disk, cube, cylinder, etc. would produce the same x-ray projection. Even if we knew the shape of the object, in this case a sphere, we wouldn't know the height of the object. The pessimistic answer to our starting question is that we don't know much about the object. The optimistic answer is that using many of these slices will produce useful information.

Figure 5: The blue box shows the possible locations and shapes of the object, with the x-ray slice outlined in red.

Now, suppose we use the same system to take a second image by rotating the source and sensor 90° (counterclockwise), as in Figure 6. For this spherical object, the x-ray projection looks the same as in the original scan. Should we be disappointed that the slice looks the same? Again, the blue box shows the possible locations and shapes of the object that created the image. Here is the subtlety: we know that the second image is taken at a different angle, and the blue box doesn't have much information locating the object left to right; it can be anywhere along the length of the blue box. However, since the box is now the height of the object, we know a lot about the vertical location of the object. Comparing what we measured to the previous figure (Figure 4), we can see that that slice gives a lot of information about the left-to-right location and nothing about the height.

Figure 6: The x-ray source and sensor are rotated 90°. The blue box shows the possible locations and shapes of the object, with the x-ray slice outlined in red.

Now, with two images, we have much more information. The two projections are shown together in Figure 7, where we can see that the object must lie in the intersection of the vertical blue plane and the horizontal blue plane! Together, the two images and their back-projections (the blue boxes) locate the object to a much more limited region of the plane.
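The way two perpendicular views pin down the object can be sketched with a toy back-projection. The grid size and object position below are assumed for illustration; each 1-D measurement is smeared back across its unknown axis (the "blue box"), and the object must lie where the two smears overlap:

```python
import numpy as np

# Toy 2-D scene: a single object region in a 16x16 grid (assumed example).
scene = np.zeros((16, 16))
scene[5:8, 9:12] = 1.0  # object somewhere off-center

# View 1: x-rays travel top-to-bottom, so the sensor measures column sums.
proj_vertical = scene.sum(axis=0)
# View 2: source/sensor rotated 90 degrees, so the sensor measures row sums.
proj_horizontal = scene.sum(axis=1)

# Back-project each 1-D measurement by smearing it across the unknown axis.
bp1 = np.tile(proj_vertical, (16, 1))             # "blue box" for view 1
bp2 = np.tile(proj_horizontal[:, None], (1, 16))  # "blue box" for view 2

# The object must lie where both back-projections are nonzero.
recon = bp1 * bp2
rows, cols = np.nonzero(recon)
print(rows.min(), rows.max(), cols.min(), cols.max())  # 5 7 9 11
```

The intersection recovers exactly the 3x3 region where the object was placed, just as the intersecting blue boxes do in Figure 7.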
We now know a lot about the location but not much about the object shape.

Figure 7: Combining the two sources of information narrows the region in the sectioning plane where the object could exist.

The process of using the one-dimensional slices to recreate a two-dimensional slice of the object in the scanning plane is called back-projection. This is done mathematically, but we can use the graphics in Figure 8 to further illustrate the process. We can see that as the system uses more projections to rebuild the scan, we get closer to the location and shape of the object. In the first panel on the left in Figure 8, we see what we already learned from the two projections: there is something located in the center of the image. In the second panel, we see in the brightest area of the center that our object may have eight sides. In the third panel, looking at the brightest area in the center of the image, 16 projections provide a good estimate of the cross-section of the sphere. Note that the star shapes around the object will need to be removed with filtering.

Figure 8: Back-projections using 4 measurements, then 8, and then 16. Each flat face of the black area represents where a projection was acquired.

2.2 Sinogram and Filtering

How do we remove noise in an image? To understand, we will consider filters and then how they are applied to x-ray images.

2.2.1 Filtering

2.2.1.1 Filtering by Wavelength

As shown in Figure 9, a filter is a device that acts on waves of different frequencies/wavelengths differently. It is designed to make a purposeful change to the output by reducing the magnitude of certain frequencies.

Figure 9: A filter acts on the input according to the frequency of the input.

There are several types of filters that are widely used. As shown in Figure 10, a high-pass filter reduces waves below a certain frequency and leaves unchanged the frequencies above that frequency.
This is a general statement, and each filter is designed for the range of frequencies it passes and blocks. Just remember that the high-pass filter allows frequencies above a cut-off frequency to pass unaffected. The low-pass filter is the opposite: it allows frequencies below a cut-off frequency to pass unaffected.

Figure 10: A high-pass filter blocks low frequencies. A low-pass filter blocks high frequencies.

Two additional types of filters can be created by combining the ideas of the high-pass and low-pass filters. As shown in Figure 11, a band-pass filter blocks frequencies below a lower cut-off frequency and blocks frequencies above a higher cut-off frequency. In this manner, a range of frequencies (a band) passes through the filter; everything outside the band is blocked. The opposite of the band-pass filter is the band-stop filter, which blocks a range (a band) of frequencies and lets all other frequencies pass.

Figure 11: The band-pass filter allows a fixed range of frequencies through the filter; those above and below get blocked. The band-stop filter is just the opposite: it blocks frequencies in a certain range and allows higher and lower frequencies to pass through unchanged.

As shown in Figure 12, light contains waves of different wavelengths/frequencies. White light contains all the wavelengths together. A colored lens is a band-pass filter that passes a range of frequencies around a specific color. For example, a blue filter passes "mostly" blue light, but also passes nearby frequencies.

Figure 12: Visible light occurs as specific wavelengths. Cyan, magenta, and yellow lenses filtering white light. Note that the cyan lens and yellow lens together block all but green light. The three lenses together block all light and produce the dark spot in the middle.

Example: Sunglasses are essentially filters for light that block out unwanted wavelengths, such as ultraviolet light.
In this sense, the sunglasses are essentially a low-pass filter (ultraviolet frequencies sit above the visible range and are blocked).

Example: Light filtering, from www.enchroma.com: "Most types of color blindness occur when there is an excessive overlap of the M (green) and L (red) color cones in the eye, causing distinct hues to become indistinguishable. As a result, the number of shades of color a typical color blind person can see may be reduced by as much as 90%." (https://enchroma.com/pages/how-enchroma-glasses-work) The solution they propose to this problem is to block the wavelengths of light that cause this confusion. They design the glass in the lens to block a specific range of frequencies between the red and green colors. In the simplest form, the glass in the lens acts as a band-stop filter to remove the unwanted frequencies from reaching the eye, thereby restoring a sense of normal color vision.

2.2.1.2 Representing Signals as Waves

As shown in Figure 13, any signal that we measure, for example brightness or voltage, can be built from a set of sinusoids (waves that look like a sine or cosine wave). If we have a signal, there is a mathematical process to determine how sinusoids of different frequencies and strengths make up the signal. It is a complex process, and a large number of sinusoids may be needed to make up even a simple signal. Figure 13 shows the overall idea that a signal can be thought of as containing many frequencies and amplitudes. This is actually a very subtle point: we are not calling the measured signal a wave; we are saying that we could think of that signal as being many sinusoids (which look like waves) added together.

Figure 13: Any signal can be represented by a set of sinusoids (sine or cosine waves).

An important takeaway from this fact is that sharp edges or spikes in the signal mean that high-frequency waves are present, while slowly changing patterns indicate that low-frequency sinusoids are present.
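This connection between sharp edges and high frequencies can be sketched in a few lines of NumPy (an illustrative example, not from the module): a square wave is built from its odd sinusoidal harmonics, and the more high-frequency terms we include, the sharper the edges become:

```python
import numpy as np

# A square wave has sharp edges, so it needs high-frequency sinusoids.
t = np.linspace(0, 1, 1000, endpoint=False)
target = np.sign(np.sin(2 * np.pi * t))          # ideal 1 Hz square wave

def square_from_harmonics(n_harmonics):
    """Partial Fourier series of the square wave (odd harmonics only)."""
    s = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):       # k = 1, 3, 5, ...
        s += (4 / np.pi) * np.sin(2 * np.pi * k * t) / k
    return s

err_3 = np.mean((square_from_harmonics(3) - target) ** 2)
err_50 = np.mean((square_from_harmonics(50) - target) ** 2)
print(err_50 < err_3)  # True: extra high-frequency terms sharpen the edges
```

Running this in reverse is what a low-pass filter does: dropping the high harmonics rounds the edges off again, which is the effect shown in Figure 14.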
Figure 14: The sharp edges and spikes are removed by a low-pass filter.

2.2.1.3 Filtering Images

We can now get to the idea that, since an image is a measured signal and any signal comprises sinusoids of different frequencies, we can think of representing an image with a set of sinusoids. Then, since we can filter any signal with a frequency-specific filter, we can filter an image. We have thus arrived at an important tool for CT imaging: filtering images to remove noise or improve information clarity. An example of using low-pass and high-pass filters on an image is shown in Figure 15. On the left, a low-pass filter is applied to reveal seemingly hidden information, and on the right, the information that is distracting from the pattern is removed with a high-pass filter. The low-pass filter removes the high-frequency dot pattern to reveal the watermark, and the high-pass filter creates an image that better displays the dominant dot pattern. In Figure 16, the x-ray image in the middle is filtered by high- and low-pass filters. In the high-pass filtered image on the right, the details of the edges of the bones are more clearly visible. In both examples, filtering the original image provides two new images that convey different information.

Figure 15: In the middle is an image with small white dots. The dots are small, closely spaced, and very clearly separated from the black background. This type of pattern will have high-frequency components. Further examination of the original image shows some whiter areas that look like noise in the dot pattern. The image is processed with a high-pass filter on the right; notice that the white areas are removed and only the dot pattern remains. The image is processed with a low-pass filter on the left; the low-pass filter removes the dot pattern and reveals that the white areas are the watermark protecting the image. (iStock.com)

Figure 16: Filtering of an x-ray of a cat to reveal different details.
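A separation like the one in Figure 15 can be sketched with a 2-D frequency-domain filter. The synthetic "watermark" and "dot" patterns and the 0.1 cycles/pixel cut-off below are assumed for illustration; the low-pass output keeps the slow pattern and the high-pass output keeps the fine texture:

```python
import numpy as np

# Synthetic 64x64 image: a slowly varying "watermark" plus fine "dots".
n = 64
y, x = np.mgrid[0:n, 0:n]
watermark = np.sin(2 * np.pi * 2 * x / n)                       # 2 cycles across
dots = 0.5 * np.sin(2 * np.pi * 20 * x / n) * np.sin(2 * np.pi * 20 * y / n)
image = watermark + dots

# 2-D FFT and the radial frequency of every coefficient (cycles/pixel).
F = np.fft.fft2(image)
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
radius = np.sqrt(fx**2 + fy**2)

# Ideal low-pass keeps the watermark; ideal high-pass keeps the dots.
low = np.fft.ifft2(np.where(radius <= 0.1, F, 0)).real
high = np.fft.ifft2(np.where(radius > 0.1, F, 0)).real

print(np.allclose(low, watermark, atol=1e-8),
      np.allclose(high, dots, atol=1e-8))
```

Because the two patterns occupy disjoint frequency bands here, the split is essentially exact; on a real image the separation is only approximate.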
2.2.2 Sinogram

Having described computed tomography as the process of collecting slices of the cross-section of the subject, a useful visualization of all of the slices displayed according to angle is called a "sinogram." The process of creating the sinogram is shown in Figure 17. Each image is taken at an angle of the rotating x-ray source/detector; these angles are plotted on the horizontal axis. The slice is then inserted at each angle. The distance from the left edge of the sensor is the bottom of the slice, and the top is the right edge of the sensor measurements. Note that every 180° will have the same image slice; however, the image is flipped top-to-bottom because of the sensor orientation.

Figure 17: Process of assembling the sinogram from measurements at each angle.

A sinogram created from simulated x-rays of the subject at 1° increments is shown in Figure 18.

Figure 18: Sinogram of a simulated phantom; the sinogram uses measurements taken every one degree.

The sinogram can be used to make certain observations about the scan. For example, a discontinuity or sharp change in the sinogram indicates that the subject moved during the scan. The value of the sinogram is mainly to perform filtering before using back-projections to create an image of the subject.

2.2.3 Example of Sinogram and Back-projection

You will have to use your imagination here. Pretend that the logo of the bulldog in Figure 19 is a thin object; it has some depth so that the features would be visible in an x-ray taken from the sides, as in a CT scan. The sinograms on the right side of Figure 19 show the series of projections at 10° increments in the top image and at 2° increments in the bottom image. Note that the sinogram at 10° increments looks more pixelated (blocky) than the 2° scan.
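A short sketch explains where the name "sinogram" comes from. For the standard parallel-beam geometry, a single point in the subject at polar position (r0, theta0) projects onto detector position s = r0·cos(theta − theta0) when the source/detector is at angle theta, so the point traces a sinusoid down the stack of slices; the point location below is assumed for illustration:

```python
import numpy as np

# A single off-center point at polar position (r0, theta0) in the subject.
r0, theta0 = 3.0, np.deg2rad(30)            # assumed example location
angles = np.deg2rad(np.arange(0, 360, 1))   # one projection per degree

# Detector position of the point's shadow at each gantry angle: a sinusoid.
detector_pos = r0 * np.cos(angles - theta0)

# After 180 degrees the same ray is seen from the opposite side, so the
# detector position is mirrored -- the top-to-bottom flip noted in the text.
half = len(angles) // 2
print(np.allclose(detector_pos[half:], -detector_pos[:half]))  # True
```

Every point in the subject contributes such a sine curve, and the sinogram is their superposition.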
The blockiness in fact illustrates that the 2° scan has much more information than the 10° scan: the 2° scan has 360°/(2° per scan) = 180 projections, while the 10° scan has 360°/(10° per scan) = 36 projections.

Figure 19: Scanning object. Sinograms shown for two different angle increments; top: sensor/detector rotated in 10° steps, bottom: sensor/detector rotated in 2° steps.

Back-projections can be performed using the projections in the sinograms of Figure 19. Two back-projections are shown in Figure 20, where we can easily recognize the original object from both back-projections; sampling at more angles, i.e., 2°, produces a better reconstruction (estimate) of the original object. Figure 21 contains reconstructions from three different numbers of scanning angles.

Figure 20: Back-projection from slices to recreate the original object.

Figure 21: Comparison of different numbers of scanning angles. Reconstruction from parallel-beam projection with 18, 24, and 90 projection angles.

2.3 Scanning

We now have a good idea of how a slice is created:

1. Gather the projection data (x-ray images) at many angles.
2. Store the information in a sinogram.
3. Filter the sinogram.
4. Back-project each slice of the sinogram to create an image of the cross-section of the subject.

As illustrated in Figure 22, this process yields one of the many slices we need to create a 3D image of the subject.

Figure 22: We have described a process to acquire data and create one of the slices we need to build a 3D image of the subject.

Now, the final challenge is to repeat the process of creating a slice many times. We have two options: we can move the subject relative to the rotating x-ray source/detector, or we can move the rotating x-ray source/detector relative to the patient. Since it is much easier to move the subject by moving the table than to move all the x-ray equipment, this is what is done.
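The filtering step in the slice-creation recipe above is classically done with a "ramp" filter applied to each 1-D slice of the sinogram before back-projection; this high-pass step is what suppresses the star-shaped blur noted in Figure 8. A minimal, idealized sketch (random numbers stand in for a real sinogram, and no smoothing window is applied):

```python
import numpy as np

# Toy sinogram: 180 projection angles, 128 detector positions (assumed sizes).
n_detectors, n_angles = 128, 180
sinogram = np.random.default_rng(0).random((n_angles, n_detectors))

# Ramp filter: gain proportional to frequency, so it is a high-pass filter.
freqs = np.fft.rfftfreq(n_detectors)   # 0 ... 0.5 cycles/sample
ramp = np.abs(freqs)

# Filter every 1-D slice (each row) in the frequency domain.
S = np.fft.rfft(sinogram, axis=1)
filtered = np.fft.irfft(S * ramp, n=n_detectors, axis=1)

# The ramp zeroes the DC (average) level of each slice; it is this removal of
# the constant smear that cleans up the back-projected image.
print(np.allclose(filtered.mean(axis=1), 0.0))  # True
```

After this step, each filtered slice would be smeared back across the image plane at its angle and the results summed, completing the filtered back-projection.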
Now, moving the subject, stopping, and rotating the scanner completely around the patient is a slow process, and, as it turns out, an unnecessary one. In Figure 23 below, both the subject and the source/detector are moved at the same time to create a helical path of the sensor relative to the subject. Note that this complicates the mathematics of slicing and back-projection, but the basic process of generating slices to build a 3D image remains.

Figure 23: The net effect of moving both the subject and the sensor/detector is a helical path of the sensor/detector relative to the subject.

3 Summary

We can now put it all together as shown in Figure 24. Computed tomography overcomes the fundamental limitation that a single x-ray does not contain depth information. Depth information is synthesized by rotating the x-ray source and sensor about the subject. In the simplest description, a single rotation of the source/sensor is synthesized into a map of a cross-section of the subject using back-projections. The subject is moved to create a series of sections that represent a volume of the subject's anatomy. Refinements to the technology include fan-shaped back-projections to account for the actual beam shape, moving the subject and source/detector together to create a helical path, multiple beam sources, and partial rotations of the source/detector.

Figure 24: Final view of computed tomography measurements of a subject.
