A Radiomics-based Unread Cervical Imaging Classification Algorithm

  • Kim, Go Eun (Department of Biomedical Engineering, Gachon University) ;
  • Kim, Young Jae (Department of Biomedical Engineering, Gachon University) ;
  • Ju, Woong (Department of Obstetrics & Gynecology, Ewha Womans University Seoul Hospital) ;
  • Nam, Kyehyun (Department of Obstetrics & Gynecology, Soonchunhyang University, Bucheon Hospital) ;
  • Kim, Soonyung (R&D Center, NTL Medical Institute) ;
  • Kim, Kwang Gi (Department of Biomedical Engineering, Gachon University)
  • Received : 2021.10.27
  • Accepted : 2021.10.29
  • Published : 2021.10.31

Abstract

Recently, artificial intelligence systems for the diagnosis of obstetric diseases have been actively studied. Artificial intelligence diagnostic support systems offer efficiency and accuracy benefits for medical diagnosis, but their learning accuracy and reliability can suffer when inappropriate images are supplied as the model's input data. For this reason, we propose an algorithm that excludes unreadable cervical images before training. This study used 2,000 readable and 257 unreadable cervical images. Experiments were conducted with the statistical radiomics method to extract feature values from the entire image set, classify the unreadable images, and obtain a range of readability threshold values. The degree to which brightness, blur, and the cervical region were adequately captured in each image served as the classification indicators. We compared the classification performance of a deep learning classification model trained on the readable cervical images selected by the proposed algorithm with that of a model trained on unreadable images, and used this comparison to evaluate the algorithm's classification accuracy for unreadable cervical images. The images selected by the algorithm yielded a higher average accuracy of 91.6%. The proposed algorithm is expected to improve reliability by effectively excluding unreadable cervical images and ultimately reducing errors in artificial intelligence diagnosis.
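
As a rough illustration of the kind of readability screening described above, the minimal sketch below (in Python, assuming OpenCV is available) flags an image as unreadable when its mean grayscale brightness falls outside a plausible range or its Laplacian-variance sharpness score drops below a threshold. The threshold values and the function name is_readable are hypothetical placeholders for illustration; the study's actual read/unread ranges were derived statistically from radiomics features, which this sketch does not reproduce.

    import cv2

    # Hypothetical cut-offs for illustration only; the study obtains its actual
    # read/unread threshold ranges statistically from radiomics feature values.
    BRIGHTNESS_RANGE = (40, 220)   # acceptable mean grayscale intensity
    BLUR_THRESHOLD = 100.0         # minimum variance of the Laplacian (sharpness)

    def is_readable(image_path: str) -> bool:
        """Return True if the image passes simple brightness and blur checks."""
        image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        if image is None:
            raise FileNotFoundError("Could not read image: " + image_path)

        mean_brightness = float(image.mean())                 # brightness indicator
        blur_score = cv2.Laplacian(image, cv2.CV_64F).var()   # sharpness indicator

        within_brightness = BRIGHTNESS_RANGE[0] <= mean_brightness <= BRIGHTNESS_RANGE[1]
        sharp_enough = blur_score >= BLUR_THRESHOLD
        return within_brightness and sharp_enough

    # Example usage: exclude unreadable images before model training.
    # readable_files = [f for f in image_files if is_readable(f)]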

Keywords

Acknowledgement

This research was supported by the Technology Development Program (S2797147) funded by the Ministry of SMEs and Startups (Korea) and by Gachon University Gil Medical Center (FRD2019-11-02(3)).
