
Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates

  • Ha Kyung Jung (Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center) ;
  • Kiduk Kim (Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center) ;
  • Ji Eun Park (Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center) ;
  • Namkug Kim (Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center)
  • Received : 2024.04.19
  • Accepted : 2024.08.29
  • Published : 2024.11.01

Abstract

Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep-learning tasks. However, the methods for evaluating these models and their clinical utility have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models and outlines their utility in clinical tasks in radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. It aims to guide readers in radiology practice and research using image-generative AI by 1) reviewing the basic theory of image-generative AI, 2) discussing the methods used to evaluate generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.
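A minimal sketch of two reference-based image-quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), which are commonly used when comparing generated medical images against paired ground-truth images. It assumes NumPy and scikit-image are available; the function name evaluate_generated_image and the toy arrays are illustrative only, not from the article.

```python
# Sketch: paired-image evaluation of a generated image with PSNR and SSIM.
# Assumes 2D grayscale arrays (e.g., a CT or MR slice) on the same intensity scale.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_generated_image(reference: np.ndarray, generated: np.ndarray) -> dict:
    """Compare a generated image with its ground-truth reference."""
    data_range = float(reference.max() - reference.min())  # intensity span of the reference
    psnr = peak_signal_noise_ratio(reference, generated, data_range=data_range)
    ssim = structural_similarity(reference, generated, data_range=data_range)
    return {"psnr_db": psnr, "ssim": ssim}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((256, 256)).astype(np.float32)                 # stand-in "reference" slice
    gen = ref + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)  # stand-in "generated" slice
    print(evaluate_generated_image(ref, gen))
```

When no paired ground truth exists, distribution-level metrics such as the Fréchet inception distance are typically used instead.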

Acknowledgement

This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (grant number: RS-2023-00305153) and by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant numbers: HI22C1723 and HR20C0026).
