Developing deep learning models to assist radiologists in accurately diagnosing medical conditions from imaging data, enhancing diagnostic speed and accuracy.

**Introduction**
================

**Research Background**
-----------------------

Radiology has, over the past several decades, been the backbone of modern medicine. It serves as the mainstay of diagnostic imaging and significantly influences patient care in most medical specialities. Hosny et al. (2018) note that over roughly a century the field has undergone repeated transformations, from the serendipitous discovery of X-rays to current state-of-the-art imaging technologies such as computed tomography, magnetic resonance imaging, and positron emission tomography. These innovations have revolutionized our ability to view internal body structures and to detect a wide range of pathologies non-invasively, bringing about a fundamental shift in medical diagnostics and treatment planning.

The integration of artificial intelligence, and deep learning techniques in particular, has recently emerged as a promising way to address challenges that have long faced radiology (Pianykh et al., 2020). The exponential growth in the volume of medical imaging data, together with the increasing complexity of diagnostic tasks, has brought radiologists both opportunities and challenges. Imaging data is rich in information about human anatomy and pathology, but it is also difficult to analyse and interpret. AI in radiology opens a new frontier for improving diagnostic accuracy and refining workflow efficiency, while possibly reducing human error. Deep learning models, a subset of AI techniques, have already demonstrated great potential in image analysis, detecting patterns and anomalies in radiological images with remarkable accuracy (Litjens et al., 2017). Such models can be trained on vast sets of medical images to learn complex features and relationships, which can support radiologists during the diagnostic process. However, the integration of AI into radiology practice is not without challenges. Model interpretability, generalizability across diverse patient populations, and the ethical implications of AI-assisted diagnosis all demand careful consideration (Thrall et al., 2018). As the role of AI in radiology continues to evolve, questions remain about how these technologies will affect the profession and patient care over the long term.

**Research Motivation**
-----------------------

This study is motivated by several problems currently facing radiology. First, radiologists are under ever-increasing pressure as workloads grow alongside the demand for faster, more accurate diagnoses (Busardò et al., 2015). This high-pressure environment, combined with the sheer volume of imaging data to be analysed, can lead to fatigue and thereby compromise diagnostic accuracy. Brady (2017) further notes that human error and fatigue in such working environments can lead to errors of judgement, misdiagnosis, or overlooked findings, each of which can substantially affect patient outcomes. Second, the complexity of medical imaging data is outgrowing the expertise that is uniformly available across all health settings.
This leads to variability in the quality of diagnosis and patient care, particularly in underserved or remote areas. AI-driven diagnostic tools could help bridge this gap by providing a layer of consistency and expertise, supporting greater healthcare equity. In addition, because technology in medical imaging advances so rapidly, radiologists must continually update their knowledge and skills as new modalities and techniques emerge; this ongoing learning compounds the workload of an already stretched workforce. AI-assisted diagnosis could relieve part of this pressure by handling routine tasks and flagging questionable cases, allowing experts to direct their attention where it is most needed. Finally, there is growing recognition that AI can enhance diagnosis by identifying patterns or anomalies that might be too subtle for the human eye to detect. While AI will not replace radiologists, it can become a powerful tool for augmenting human expertise, ultimately enabling earlier diagnosis and more personalized treatment.

**Research Aim**
----------------

This dissertation primarily aims to develop and evaluate a deep learning AI model that assists radiologists in accurately diagnosing medical conditions from imaging data. The research does not attempt to replace radiologists; rather, it designs a complementary tool to enhance their diagnostic capabilities and workflow efficiency. Specifically, the aim is to create an AI model that can:

1. Accurately and efficiently analyse large volumes of medical imaging data.
2. Identify patterns and anomalies that the human eye might easily miss.
3. Provide consistent and reliable assistance across various imaging modalities and pathologies.
4. Keep pace with the evolving technological landscape of medical imaging.

By achieving these aims, the research intends to contribute to radiology practice through greater diagnostic accuracy, reduced workload-related stress on radiologists, and better patient care.

**Research Objectives**
-----------------------

To achieve the overarching aim of this dissertation, the following specific objectives have been identified:

1.
2.
3.
4.
5.
6.

By pursuing these objectives, the research aims to provide a comprehensive evaluation of the potential for AI-driven diagnosis in radiology, addressing both the technical challenges and the broader implications for healthcare practice.

**Research Contribution**
-------------------------

This dissertation aims to make several noteworthy contributions to the developing field of AI-driven radiology:

1.
2.
3.
4.
5.

In particular, the contributions are intended to advance both the technical application of AI in radiology and our understanding of the broader implications for healthcare practice and policy.

**Dissertation Outline**
------------------------

The remainder of this dissertation is structured as follows:

Chapter 2 Literature Review: This chapter gives a critical review of the available literature on the application of AI in radiology, from the historical development of AI techniques for medical imaging to current state-of-the-art models and the research gaps identified.
Moreover, it discusses the challenges and opportunities involved in implementing AI in clinical radiology practice.

Chapter 3 Methodology: This chapter explains the research design and the methods used to carry out the study, including how data are collected, the architecture of the proposed AI model, and the evaluation metrics and protocols. It also outlines how the impact on radiologists' workflow is assessed and how the economic and ethical analyses are conducted.

Chapter 4 Model Development and Technical Evaluation: This chapter describes the construction of the AI model, from data preprocessing to model architecture and training procedures. The second part gives a detailed technical evaluation of the model's performance: accuracy metrics, cross-validation results, and comparisons with existing benchmarks.

Chapter 5 Explainability and Ethical Considerations: This chapter focuses on the challenge of AI explainability and the techniques used to make model reasoning more interpretable. It also addresses ethical concerns raised by the use of AI in medical diagnosis, including bias, privacy, and informed consent.

Chapter 6 Discussion and Future Directions: This chapter synthesises the findings from the previous chapters, discussing their implications for radiology and for healthcare more generally. Limitations of the present study are identified and directions for future work are set out.

Chapter 7 Conclusion: This final chapter summarises the main findings and contributions of the dissertation, returning to the broad research aims and objectives. It closes with a reflection on the future of AI in radiology and its potential to change medical practice.

This structured approach is intended to give the dissertation comprehensive coverage of AI-driven diagnosis in radiology, from technical development to practical implementation and the broader implications for healthcare.

**Literature Review**
=====================

**1. Introduction**
-------------------

Artificial intelligence has become a driver of change in medical imaging and diagnosis. This literature review presents a comprehensive account of the present state of AI-driven diagnosis in radiology, covering its development, applications, challenges, and future prospects. The principal topics included in this review are the evolution of AI in radiology, deep learning techniques, clinical applications, performance evaluation, workflow integration, ethics, and future directions.

**2. Evolution of AI in Radiology**
-----------------------------------

### **2.1 Historical Perspective**

Efforts to apply AI in radiology date back to the 1960s, beginning with the first computer-aided diagnosis (CAD) systems (Doi, 2007). These early systems were rule-based, relied on features manually engineered by humans, and were targeted at detecting abnormalities in medical images. Although innovative for their period, they achieved only limited success because they could not cope with the high degree of complexity and variability in medical images. Lodwick et al. (1963) were pioneers in this field, developing one of the earliest computer-based diagnostic systems for lung nodule detection on chest radiographs. Their work formed the foundation for further development of automated image analysis in radiology.
### **2.2 Transition to Machine Learning**

In the 1990s and early 2000s, radiology research moved towards machine learning approaches. Methods such as support vector machines and random forests outperformed rule-based systems on image analysis tasks (El-Naqa et al., 2002). These methods still depended heavily on human-engineered features, however, and the high dimensionality of medical imaging data remained an unsolved challenge. Chan et al. (1995) showed that artificial neural networks held great promise for mammography, detecting microcalcifications more successfully than conventional CAD systems. This work foreshadowed the deep learning revolution that would follow.

### **2.3 The Deep Learning Revolution**

The advent of deep learning, and convolutional neural networks (CNNs) in particular, brought a paradigm shift in AI applications in radiology. Krizhevsky et al. (2012) demonstrated the power of deep CNNs for image classification, and these techniques quickly found applications in medical imaging. Gulshan et al. (2016) applied deep learning to the detection of diabetic retinopathy, achieving performance comparable to that of human experts. This proof-of-concept study showed AI's potential to assist with complex diagnoses and elicited further interest in applying deep learning across radiological applications.

**3. Deep Learning Techniques in Radiology**
--------------------------------------------

### **3.1 Convolutional Neural Networks (CNNs)**

CNNs have become the cornerstone of AI applications in radiology because of their capacity to learn hierarchical features automatically from image data. Litjens et al. (2017) provide a comprehensive survey of deep learning techniques in medical image analysis, underscoring the central role of CNNs in successes across classification, detection, and segmentation tasks. He et al. (2016) proposed residual networks (ResNets), which enabled the training of much deeper neural networks. This architecture has since been applied to a wide range of medical imaging tasks, allowing more complex feature learning and improved performance.

### **3.2 Transfer Learning**

Transfer learning has proved particularly useful in medical imaging, where large annotated datasets are often scarce. Shin et al. (2016) demonstrated the value of transfer learning for thoraco-abdominal lymph node detection and interstitial lung disease classification, showing that networks pre-trained on natural images can be fine-tuned successfully for medical image analysis.
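To make this idea concrete, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 for a hypothetical two-class chest radiograph task. It is a minimal illustration assuming PyTorch and torchvision; the dataset path, class count, and training settings are placeholders, not the pipeline developed in this dissertation.

```python
# Minimal transfer-learning sketch (assumed setup, not the dissertation's actual pipeline):
# fine-tune an ImageNet-pretrained ResNet-50 on a hypothetical folder-per-class
# chest-radiograph dataset laid out as chest_xrays/train/{normal,pneumonia}/*.png.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # ResNet-50 expects 224x224 RGB input
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset location; replace with the real data source.
train_set = datasets.ImageFolder("chest_xrays/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for param in model.parameters():                       # freeze the pre-trained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)          # new task-specific head (2 classes)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

for epoch in range(5):                                 # short fine-tuning run for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```

In practice the frozen backbone layers are often unfrozen after a few epochs and trained at a lower learning rate, trading longer training for better adaptation to the medical domain.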
### **3.3 Generative Adversarial Networks (GANs)**

In medical imaging, GANs have been applied to image synthesis, data augmentation, and domain adaptation. Wolterink et al. (2017) used cycle-consistent GANs for unpaired MR-to-CT image translation, demonstrating the potential of GANs for cross-modality synthesis.

### **3.4 Attention Mechanisms and Transformers**

Attention mechanisms and transformer architectures have been introduced to medical imaging in recent years. Attention mechanisms guide models towards the relevant parts of an image, improving both interpretability and performance. Oktay et al. (2018) proposed attention gates for medical image segmentation, achieving better performance and interpretability. Transformer architectures, originally developed for natural language processing tasks, have also been adapted for medical imaging. Dosovitskiy et al. (2020) proposed the Vision Transformer (ViT), which has since produced promising results on several medical imaging tasks.

**4. Clinical Applications of AI in Radiology**
-----------------------------------------------

### **4.1 Chest Radiography**

Chest radiography is one of the most frequently performed imaging examinations and has been a prime target for AI applications. Rajpurkar et al. (2017) developed CheXNet, a deep learning algorithm that identifies pneumonia from chest X-rays with performance exceeding that of practicing radiologists, demonstrating how AI could assist with common diagnostic tasks. Hwang et al. (2019) developed a deep learning algorithm for detecting multiple abnormalities on chest radiographs, including pulmonary malignancies, pneumothorax, and tuberculosis. It achieved high sensitivity and specificity across conditions, showing AI's potential as a broad screening tool.

### **4.2 Mammography**

Mammography has been another core application area in radiology. McKinney et al. (2020) developed an AI system for breast cancer screening with diagnostic performance superior to human experts, reducing both false positives and false negatives. These results indicate strong potential for AI to improve the efficiency and accuracy of breast cancer screening programmes. Wu et al. (2019) developed a deep learning model for breast density classification, an important risk factor for breast cancer. Their model showed high concordance with radiologists' assessments, suggesting a role for AI in risk stratification.

### **4.3 Neuroimaging**

AI has proved useful in neuroimaging for detecting tumours, diagnosing strokes, and evaluating neurodegenerative diseases. Kamnitsas et al. (2017) achieved state-of-the-art brain lesion segmentation with a multi-scale 3D CNN applied to multi-modal MRI data. Chilamkurthy et al. (2018) developed and validated deep learning algorithms for detecting intracranial haemorrhage and its subtypes on head CT scans. Their system was highly sensitive and specific, showing that AI can assist in time-critical diagnosis such as acute stroke care.

### **4.4 Abdominal Imaging**

In abdominal imaging, AI has been applied to tasks such as liver lesion detection and characterization. Yasaka et al. (2018) built a deep learning model to differentiate liver masses on dynamic contrast-enhanced CT, performing comparably to radiologists. Akkus et al. (2017) reviewed deep learning approaches for brain MRI segmentation, including glioma segmentation, noting that the accuracy achieved can support treatment planning and monitoring.

**5. Performance Evaluation and Validation**
--------------------------------------------

### **5.1 Metrics and Benchmarks**

Evaluating AI models in radiology requires appropriate metrics and suitable benchmarks. Topol (2019) argued that clinically relevant measures are needed when comparing AI performance with human performance. Commonly applied metrics include sensitivity and specificity; overall model performance is often gauged by the area under the receiver operating characteristic curve (AUC-ROC), and for segmentation tasks the Dice coefficient is used.
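As a concrete reference for these measures, the sketch below computes sensitivity, specificity, AUC-ROC, and the Dice coefficient on a toy set of predictions. It is a minimal illustration assuming NumPy and scikit-learn; the arrays are synthetic examples, not results from this study.

```python
# Minimal sketch of the evaluation metrics named above, on synthetic data
# (assumes NumPy and scikit-learn; illustrative only, not dissertation results).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # ground-truth labels
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.3, 0.8, 0.6])   # model scores
y_pred = (y_prob >= 0.5).astype(int)                           # threshold at 0.5

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

sensitivity = tp / (tp + fn)         # true positive rate (recall)
specificity = tn / (tn + fp)         # true negative rate
auc = roc_auc_score(y_true, y_prob)  # threshold-independent ranking quality

# Dice coefficient for a segmentation task: 2 * |A intersect B| / (|A| + |B|)
seg_pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])  # predicted mask
seg_true = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])  # reference mask
dice = 2 * np.sum(seg_pred * seg_true) / (np.sum(seg_pred) + np.sum(seg_true))

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"AUC={auc:.2f} Dice={dice:.2f}")
```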
Kaissis et al. (2020) review performance metrics for medical imaging AI, emphasizing that metrics should be chosen to reflect clinical objectives and decision-making processes.

### **5.2 External Validation**

External validation on diverse datasets is necessary to establish the generalizability of AI models in radiology. Zech et al. (2018) showed that deep learning models can exploit confounders in medical imaging datasets, strengthening the case for rigorous external validation. In a systematic review of AI studies in medical imaging, Yao et al. (2020) found that most did not include external validation. This shortcoming can lead to overestimation of a model's performance and generalizability.

### **5.3 Comparison with Human Performance**

Comparing AI performance with that of human experts is an important aspect of validation. Liu et al. (2019), in a systematic review and meta-analysis of deep learning versus healthcare professionals on image analysis tasks, found that AI matched healthcare professionals on most tasks. However, Nagendran et al. (2020) take a more cautious view because of the methodological limitations of many studies comparing AI with human experts, emphasizing the need for more rigorous evaluation protocols.

**6. Ethical and Legal Considerations**
---------------------------------------

### **6.1 Privacy and Data Protection**

The development and deployment of AI in radiology raise important privacy and data protection concerns. Kaissis et al. (2020) identified sharing data without violating personal privacy as a central challenge for medical imaging AI, for which federated learning is a potential solution. Cohen et al. (2018) noted that assembling large-scale medical imaging datasets for AI development raises substantial ethical concerns, making robust governance frameworks and patient consent processes essential.

### **6.2 Bias and Fairness**

Reducing bias and ensuring fairness in AI models is an essential element of their ethical deployment in healthcare. Larrazabal et al. (2020) showed that deep learning models can reproduce and amplify biases present in their training data, underlining the importance of diverse and representative datasets. Gichoya et al. (2022) demonstrated that AI models can recognize patient race from medical images, reinforcing the need for diverse data collection, careful model evaluation, and ongoing monitoring for fairness.

### **6.3 Explainability and Interpretability**

The "black box" nature of many deep learning models remains a barrier to their acceptance and use in clinical practice. Reyes et al. (2020) review techniques for improving the explainability of AI models in medical imaging, covering saliency maps, class activation mapping, and concept-based explanations. Amann et al. (2020) noted that "black box" AI systems can have legal and ethical repercussions in healthcare; the interpretability of AI models therefore matters greatly for maintaining accountability and ensuring patient trust.
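To illustrate the class activation mapping idea mentioned above, the sketch below computes a Grad-CAM style heatmap for a pretrained ResNet-50, highlighting the image regions that most influenced the predicted class. It assumes PyTorch and torchvision; the chosen layer and the random input tensor are illustrative stand-ins, not the explainability method used in this dissertation.

```python
# Minimal Grad-CAM style sketch (assumes PyTorch/torchvision; illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["value"] = out                       # feature maps of the last conv stage

def save_gradient(module, grad_in, grad_out):
    gradients["value"] = grad_out[0]                 # gradients flowing back into those maps

target_layer = model.layer4                          # last convolutional stage of ResNet-50
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)                      # stand-in for a preprocessed radiograph
scores = model(x)
scores[0, scores.argmax()].backward()                # gradient of the top class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap to overlay on the input image
```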
**7. Future Directions**
------------------------

### **7.1 Multimodal and Federated Learning**

Future developments in AI-driven radiology are expected to draw on multiple imaging modalities and clinical data sources. Huang et al. (2020) point to the potential of multimodal deep learning in medical imaging for improving performance and robustness. Federated learning, which addresses data privacy concerns by allowing model training on decentralized data, has also attracted considerable attention. Sheller et al. (2020) demonstrated that federated learning enables brain tumour segmentation without loss of performance compared with a centralized approach, while maintaining data privacy.
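To illustrate the federated idea in the simplest terms, the sketch below performs one round of federated averaging (FedAvg): each simulated site trains a local copy of a model on its own data, and only the model weights, never the images, are aggregated centrally. It assumes PyTorch and uses a toy linear model with random tensors as stand-ins for institutional datasets; it is not the protocol used by Sheller et al. (2020).

```python
# Toy single-round federated averaging (FedAvg) sketch using PyTorch.
# Each "site" trains locally on its own data; only weights are shared and averaged.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
global_model = nn.Linear(10, 2)                 # stand-in for a diagnostic model

# Synthetic private datasets for three hypothetical institutions.
sites = [(torch.randn(64, 10), torch.randint(0, 2, (64,))) for _ in range(3)]

def local_update(model, data, epochs=3):
    """Train a local copy on one site's data and return its weights."""
    local = copy.deepcopy(model)
    optimizer = torch.optim.SGD(local.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()
    x, y = data
    for _ in range(epochs):
        optimizer.zero_grad()
        criterion(local(x), y).backward()
        optimizer.step()
    return local.state_dict()

# One communication round: collect local weights, average them, update the global model.
local_states = [local_update(global_model, data) for data in sites]
averaged = {
    key: torch.stack([state[key] for state in local_states]).mean(dim=0)
    for key in global_model.state_dict()
}
global_model.load_state_dict(averaged)
print("global model updated from", len(sites), "sites without sharing raw data")
```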
### **7.2 Continual Learning and Adaptation**

Future research should aim to develop AI models that can learn continuously and adapt as new data are introduced. Chen et al. (2018) introduce a continual learning framework for medical image classification, addressing the challenge of model adaptation in dynamic clinical environments.

### **7.3 Integration with Other Technologies**

Combining AI with other emerging technologies, such as molecular imaging and radiomics, may enable more individualized and accurate diagnoses. Nensa et al. (2019) discuss the application of AI together with radiomics to build more accurate characterizations of disease processes and predictive models of treatment response.

### **7.4 AI in Image Reconstruction and Acquisition**

The application of AI is also being extended to image reconstruction and acquisition. Liang et al. (2020) review deep learning approaches for medical image reconstruction, highlighting potential improvements in image quality and reductions in acquisition time.

**8. Conclusion**
-----------------

This literature review has provided a comprehensive view of the current state of AI-driven diagnosis in radiology, from its historical underpinnings in computer-aided diagnosis to today's deep learning revolution. AI has the potential to substantially improve radiological practice, with clinical applications spanning a wide range of imaging modalities and diagnostic tasks, in many of which AI has matched or exceeded human experts. Important difficulties remain, however, concerning the evaluation, clinical integration, and ethics of AI systems in radiology. Generalizability, fairness, and interpretability are key conditions that must be met before AI models can be translated into routine clinical practice. Developments in multimodal learning, federated learning, and continual adaptation are already addressing some of these challenges and could further advance AI's capabilities in radiology. As the field continues to evolve rapidly, ongoing research, close collaboration between AI developers and clinical experts, and careful attention to ethical and regulatory issues will be needed to realize the full potential of AI-driven diagnosis in radiology.

**References**
==============

**Introduction**
----------------

Brady, A.P., 2017. Error and discrepancy in radiology: inevitable or avoidable?. Insights into Imaging, 8, pp.171-182.

Busardò, F.P., Frati, P., Santurro, A., Zaami, S. and Fineschi, V., 2015. Errors and malpractice lawsuits in radiology: what the radiologist needs to know. La Radiologia Medica, 120, pp.779-784.

Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L.H. and Aerts, H.J., 2018. Artificial intelligence in radiology. Nature Reviews Cancer, 18(8), pp.500-510.

Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B. and Sánchez, C.I., 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42, pp.60-88.

Pianykh, O.S., Langs, G., Dewey, M., Enzmann, D.R., Herold, C.J., Schoenberg, S.O. and Brink, J.A., 2020. Continuous learning AI in radiology: implementation principles and early applications. Radiology, 297(1), pp.6-14.

Thrall, J.H., Li, X., Li, Q., Cruz, C., Do, S., Dreyer, K. and Brink, J., 2018. Artificial intelligence and machine learning in radiology: opportunities, challenges, pitfalls, and criteria for success. Journal of the American College of Radiology, 15(3), pp.504-508.

Literature Review
-----------------

Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D.L. and Erickson, B.J., 2017. Deep learning for brain MRI segmentation: state of the art and future directions. Journal of Digital Imaging, 30, pp.449-459.

Amann, J., Blasimme, A., Vayena, E., Frey, D., Madai, V.I. and Precise4Q Consortium, 2020. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Medical Informatics and Decision Making, 20, pp.1-9.

Chan, H.P., Lo, S.C.B., Sahiner, B., Lam, K.L. and Helvie, M.A., 1995. Computer-aided detection of mammographic microcalcifications: pattern recognition with an artificial neural network. Medical Physics, 22(10), pp.1555-1567.

Chilamkurthy, S., Ghosh, R., Tanamala, S., Biviji, M., Campeau, N.G., Venugopal, V.K., Mahajan, V., Rao, P. and Warier, P., 2018. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. The Lancet, 392(10162), pp.2388-2396.

Doi, K., 2007. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized Medical Imaging and Graphics, 31(4-5), pp.198-211.

Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S. and Uszkoreit, J., 2020. An image is worth 16x16 words: transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.

El-Naqa, I., Yang, Y., Wernick, M.N., Galatsanos, N.P. and Nishikawa, R.M., 2002. A support vector machine approach for detection of microcalcifications. IEEE Transactions on Medical Imaging, 21(12), pp.1552-1563.

Gichoya, J.W., Banerjee, I., Bhimireddy, A.R., Burns, J.L., Celi, L.A., Chen, L.C., Correa, R., Dullerud, N., Ghassemi, M., Huang, S.C. and Kuo, P.C., 2022. AI recognition of patient race in medical imaging: a modelling study. The Lancet Digital Health, 4(6), pp.e406-e414.

Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J. and Kim, R., 2016. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA, 316(22), pp.2402-2410.

He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).

Huang, S.C., Kothari, T., Banerjee, I., Chute, C., Ball, R.L., Borus, N., Huang, A., Patel, B.N., Rajpurkar, P., Irvin, J. and Dunnmon, J., 2020. PENet - a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. NPJ Digital Medicine, 3(1), p.61.
Hwang, E.J., Park, S., Jin, K.N., Im Kim, J., Choi, S.Y., Lee, J.H., Goo, J.M., Aum, J., Yim, J.J., Cohen, J.G. and Ferretti, G.R., 2019. Development and validation of a deep learning-based automated detection algorithm for major thoracic diseases on chest radiographs. JAMA Network Open, 2(3), pp.e191095-e191095.

Kaissis, G.A., Makowski, M.R., Rückert, D. and Braren, R.F., 2020. Secure, privacy-preserving and federated machine learning in medical imaging. Nature Machine Intelligence, 2(6), pp.305-311.

Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D. and Glocker, B., 2017. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Medical Image Analysis, 36, pp.61-78.

Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.

Larrazabal, A.J., Nieto, N., Peterson, V., Milone, D.H. and Ferrante, E., 2020. Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences, 117(23), pp.12592-12594.

Liang, D., Cheng, J., Ke, Z. and Ying, L., 2020. Deep magnetic resonance image reconstruction: inverse problems meet neural networks. IEEE Signal Processing Magazine, 37(1), pp.141-151.

Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., Van Der Laak, J.A., Van Ginneken, B. and Sánchez, C.I., 2017. A survey on deep learning in medical image analysis. Medical Image Analysis, 42, pp.60-88.

Liu, X., Faes, L., Kale, A.U., Wagner, S.K., Fu, D.J., Bruynseels, A., Mahendiran, T., Moraes, G., Shamdas, M., Kern, C. and Ledsam, J.R., 2019. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. The Lancet Digital Health, 1(6), pp.e271-e297.

Lodwick, G.S., Keats, T.E. and Dorst, J.P., 1963. The coding of roentgen images for computer analysis as applied to lung cancer. Radiology, 81(2), pp.185-200.

McKinney, S.M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G.S., Darzi, A. and Etemadi, M., 2020. International evaluation of an AI system for breast cancer screening. Nature, 577(7788), pp.89-94.

Nagendran, M., Chen, Y., Lovejoy, C.A., Gordon, A.C., Komorowski, M., Harvey, H., Topol, E.J., Ioannidis, J.P., Collins, G.S. and Maruthappu, M., 2020. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ, 368.

Nensa, F., Demircioglu, A. and Rischpler, C., 2019. Artificial intelligence in nuclear medicine. Journal of Nuclear Medicine, 60(Supplement 2), pp.29S-37S.

Oktay, O., Schlemper, J., Folgoc, L.L., Lee, M., Heinrich, M., Misawa, K., Mori, K., McDonagh, S., Hammerla, N.Y., Kainz, B. and Glocker, B., 2018. Attention U-Net: learning where to look for the pancreas. arXiv preprint arXiv:1804.03999.

Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K. and Lungren, M.P., 2017. CheXNet: radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.
Reyes, M., Meier, R., Pereira, S., Silva, C.A., Dahlweid, F.M., Tengg-Kobligk, H.V., Summers, R.M. and Wiest, R., 2020. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiology: Artificial Intelligence, 2(3), p.e190043.

Sheller, M.J., Edwards, B., Reina, G.A., Martin, J., Pati, S., Kotrotsou, A., Milchenko, M., Xu, W., Marcus, D., Colen, R.R. and Bakas, S., 2020. Federated learning in medicine: facilitating multi-institutional collaborations without sharing patient data. Scientific Reports, 10(1), p.12598.

Shin, H.C., Roth, H.R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D. and Summers, R.M., 2016. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Transactions on Medical Imaging, 35(5), pp.1285-1298.

Topol, E.J., 2019. High-performance medicine: the convergence of human and artificial intelligence. Nature Medicine, 25(1), pp.44-56.

Wolterink, J.M., Dinkla, A.M., Savenije, M.H., Seevinck, P.R., van den Berg, C.A. and Išgum, I., 2017. Deep MR to CT synthesis using unpaired data. In Simulation and Synthesis in Medical Imaging: Second International Workshop, SASHIMI 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 10, 2017, Proceedings 2 (pp. 14-23). Springer International Publishing.

Wu, N., Phang, J., Park, J., Shen, Y., Huang, Z., Zorin, M., Jastrzębski, S., Févry, T., Katsnelson, J., Kim, E. and Wolfson, S., 2019. Deep neural networks improve radiologists' performance in breast cancer screening. IEEE Transactions on Medical Imaging, 39(4), pp.1184-1194.

Yao, A.D., Cheng, D.L., Pan, I. and Kitamura, F., 2020. Deep learning in neuroradiology: a systematic review of current algorithms and approaches for the new wave of imaging technology. Radiology: Artificial Intelligence, 2(2), p.e190026.

Yasaka, K., Akai, H., Abe, O. and Kiryu, S., 2018. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: a preliminary study. Radiology, 286(3), pp.887-896.

Zech, J.R., Badgeley, M.A., Liu, M., Costa, A.B., Titano, J.J. and Oermann, E.K., 2018. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Medicine, 15(11), p.e1002683.