Development of a Model for MR-CT Bi-directional Conversion based on scCycleGAN

  • Da-Um Jeong (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Seung-Jin Park (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Seung-Yeon Shin (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Yong-Ah Lee (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Seong-Bin Jang (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Jong-Cheon Lim (Department of Diagnostic Radiology, Hanyang University Hospital) ;
  • Joo-Wan Hong (Department of Radiological Science, College of Health Sciences, Eulji University) ;
  • Dong-Kyoon Han (Department of Radiological Science, College of Health Sciences, Eulji University)
  • Received : 2024.10.19
  • Accepted : 2024.11.30
  • Published : 2024.11.30

Abstract


We aimed to build a bi-directional MR-CT conversion model based on the structure-constrained cycle-consistent generative adversarial network (scCycleGAN). The hardware consisted of an MDCT scanner (Somatom Definition Flash CT, SIEMENS, Germany) and a 3.0T MRI scanner (Ingenia 3.0T CX MRI, PHILIPS, Netherlands), and the software environment was Python (3.12.6) with PyTorch (2.4.0). scCycleGAN was adopted as the study model. A total of 5,307 medical images were acquired from 87 patients, comprising 2,871 head CT and 2,436 MR (T2WI) images. CT and MR images acquired at the same anatomical level were matched through a primary screening, and 364, 27, and 8 image pairs were labeled as training, validation, and test data, respectively. A hybrid objective function was then applied to a GAN model built on the basic APS framework, and the resulting model was evaluated both quantitatively and qualitatively. The qualitative evaluation was conducted with 10 radiologic technologists, each with more than 20 years of experience, and the quantitative evaluation used PSNR, IoU, SSIM, and MAE. In the qualitative evaluation, the proportion of 'positive responses', defined as a rating of 'Neutral' or better, was 63% for the synthetic CT (sCT) group and 96% for the synthetic MR (sMR) group, and both groups met the initial target values for the quantitative metrics PSNR, SSIM, and MAE. This study can serve as foundational material for research on conversion and synthesis between medical imaging modalities; if follow-up and complementary studies resolve issues such as model lightweighting and the model is applied in clinical settings, it is expected to reduce patients' radiation dose and medical cost burdens.
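The abstract does not include the authors' implementation, but the kind of hybrid objective it describes, the adversarial and cycle-consistency losses of a CycleGAN combined with a structure-constraint term, can be sketched in PyTorch as follows. The gradient-based structure term, the function names, and the loss weights here are illustrative assumptions, not the settings used in the study.

    import torch
    import torch.nn.functional as F

    def edge_map(x: torch.Tensor) -> torch.Tensor:
        # Finite-difference gradient magnitude used as a simple structural descriptor
        # (input assumed to be an (N, 1, H, W) tensor of normalized slices).
        dx = (x[..., :, 1:] - x[..., :, :-1]).abs()
        dy = (x[..., 1:, :] - x[..., :-1, :]).abs()
        return F.pad(dx, (0, 1, 0, 0)) + F.pad(dy, (0, 0, 0, 1))

    def hybrid_generator_loss(real_mr, real_ct, fake_ct, fake_mr, rec_mr, rec_ct,
                              d_ct_on_fake, d_mr_on_fake,
                              lambda_cycle=10.0, lambda_struct=5.0):
        # Least-squares adversarial term: the generators try to make the
        # discriminators output 1 for synthetic CT and synthetic MR.
        adv = F.mse_loss(d_ct_on_fake, torch.ones_like(d_ct_on_fake)) + \
              F.mse_loss(d_mr_on_fake, torch.ones_like(d_mr_on_fake))
        # Cycle-consistency term: MR -> sCT -> MR and CT -> sMR -> CT reconstructions.
        cycle = F.l1_loss(rec_mr, real_mr) + F.l1_loss(rec_ct, real_ct)
        # Structure-constraint term: edge maps of each input and its translation
        # should agree, penalizing anatomically implausible synthesis.
        struct = F.l1_loss(edge_map(fake_ct), edge_map(real_mr)) + \
                 F.l1_loss(edge_map(fake_mr), edge_map(real_ct))
        return adv + lambda_cycle * cycle + lambda_struct * struct

The four quantitative metrics named in the abstract can likewise be sketched. The following is not the authors' evaluation code; the intensity-threshold body mask used for IoU is an assumption for illustration only.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_pair(real: np.ndarray, synth: np.ndarray, body_threshold: float = 0.1):
        # Both slices are assumed to be 2D arrays normalized to [0, 1].
        psnr = peak_signal_noise_ratio(real, synth, data_range=1.0)
        ssim = structural_similarity(real, synth, data_range=1.0)
        mae = float(np.mean(np.abs(real - synth)))
        # IoU of simple intensity-thresholded body masks.
        real_mask, synth_mask = real > body_threshold, synth > body_threshold
        union = np.logical_or(real_mask, synth_mask).sum()
        inter = np.logical_and(real_mask, synth_mask).sum()
        iou = float(inter) / float(union) if union > 0 else 1.0
        return {"PSNR": psnr, "SSIM": ssim, "MAE": mae, "IoU": iou}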

Keywords

Acknowledgments

This study was conducted with the support of the 2024 Eulji University Innovation Support Project.
