Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals

  • Kiduk Kim (Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center) ;
  • Kyungjin Cho (Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine) ;
  • Ryoungwoo Jang (Coreline Soft Co., Ltd.) ;
  • Sunggu Kyung (Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine) ;
  • Soyoung Lee (Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine) ;
  • Sungwon Ham (Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine) ;
  • Edward Choi (Korea Advanced Institute of Science and Technology) ;
  • Gil-Sun Hong (Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center) ;
  • Namkug Kim (Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center)
  • Received : 2023.08.28
  • Accepted : 2023.12.28
  • Published : 2024.03.01

Abstract

The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has generated considerable interest in applying generative artificial intelligence (AI) models in the medical field. This review summarizes the main families of generative AI models and their potential applications in medicine, and traces the evolution of generative adversarial networks (GANs) and diffusion models, which have made valuable contributions to radiology, since their introduction. It also examines the significance of synthetic data for addressing privacy concerns and for augmenting data diversity and quality in the medical domain, emphasizes the role of inversion in the investigation of generative models, and outlines an approach to replicating this process. We provide an overview of large language models, such as the Generative Pre-trained Transformer (GPT) series and Bidirectional Encoder Representations from Transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the Large Language and Vision Assistant for Biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.

Acknowledgement

This research was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (HI21C1148 and HI22C1723).
