http://dx.doi.org/10.17946/JRST.2022.45.1.41

Application of Deep Learning-Based Nuclear Medicine Lung Study Classification Model  

Jeong, Eui-Hwan (KISMITS Co., Ltd.)
Oh, Joo-Young (KISMITS Co., Ltd.)
Lee, Ju-Young (Department of Radiological Technology, Songho College)
Park, Hoon-Hee (Department of Radiological Technology, Shingu College)
Publication Information
Journal of Radiological Science and Technology, v.45, no.1, 2022, pp. 41-47
Abstract
The purpose of this study is to apply deep learning models that can distinguish lung perfusion from lung ventilation images in nuclear medicine and to evaluate their image classification performance. Image data pre-processing was performed in the following order: image matrix size adjustment, min-max normalization, image center position adjustment, train/validation/test data set splitting, and data augmentation. The convolutional neural network (CNN) architectures VGG-16, ResNet-18, Inception-ResNet-v2, and SE-ResNeXt-101 were used. The classification models were evaluated using classification performance metrics, class activation maps (CAM), and a statistical image evaluation method. On the classification performance metrics, SE-ResNeXt-101 and Inception-ResNet-v2 achieved the highest and identical performance. The CAM results showed that the cardiac and right lung regions were highly activated in lung perfusion images, whereas the upper lung and neck regions were highly activated in lung ventilation images. The statistical image evaluation showed a meaningful difference between SE-ResNeXt-101 and Inception-ResNet-v2. These results confirm the applicability of CNN models to lung scintigraphy classification. The findings are expected to serve as basic data for research on new artificial intelligence models and to support stable image management in clinical practice.
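
The article does not publish source code, but the pre-processing pipeline listed above (matrix size adjustment, min-max normalization, center position adjustment, data set splitting, and augmentation) can be illustrated with a minimal Python/NumPy sketch. All function names, the 256x256 target matrix, the 70/15/15 split ratio, and the augmentation choices below are assumptions for illustration only and are not taken from the study.

import numpy as np

def adjust_matrix_size(img, target=(256, 256)):
    # Center-crop or zero-pad a planar frame to the target matrix size (assumed value).
    h, w = img.shape
    th, tw = target
    out = np.zeros(target, dtype=np.float32)
    sy, sx = max((h - th) // 2, 0), max((w - tw) // 2, 0)
    dy, dx = max((th - h) // 2, 0), max((tw - w) // 2, 0)
    ch, cw = min(h, th), min(w, tw)
    out[dy:dy + ch, dx:dx + cw] = img[sy:sy + ch, sx:sx + cw]
    return out

def min_max_normalize(img):
    # Scale pixel values to the [0, 1] range.
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def center_activity(img):
    # Shift the frame so its intensity centroid sits at the matrix center
    # (one plausible reading of the "center position adjustment" step).
    total = img.sum()
    if total == 0:
        return img
    ys, xs = np.indices(img.shape)
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    return np.roll(img, (int(round(img.shape[0] / 2 - cy)),
                         int(round(img.shape[1] / 2 - cx))), axis=(0, 1))

def split_dataset(images, labels, ratios=(0.7, 0.15, 0.15), seed=0):
    # Random train/validation/test split of an (N, H, W) image array; the ratio is assumed.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_train, n_val = int(ratios[0] * len(images)), int(ratios[1] * len(images))
    tr, va, te = np.split(idx, [n_train, n_train + n_val])
    return (images[tr], labels[tr]), (images[va], labels[va]), (images[te], labels[te])

def augment(img, rng):
    # Example augmentation: random horizontal flip and mild intensity jitter.
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)

A frame processed this way would then be fed to one of the CNN backbones named in the abstract (e.g., VGG-16 or SE-ResNeXt-101), typically after replication to three channels and resizing to the network's expected input size; those details are not specified here and would follow the study's own configuration.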
Keywords
Convolutional neural network; Deep learning; Lung scintigraphy; Class activation map; Nuclear medicine;
Citations & Related Records
Times Cited By KSCI : 2  (Citation Analysis)
연도 인용수 순위