Deep Learning-Enabled Detection of Pneumoperitoneum in Supine and Erect Abdominal Radiography: Modeling Using Transfer Learning and Semi-Supervised Learning

  • Sangjoon Park (Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology) ;
  • Jong Chul Ye (Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology) ;
  • Eun Sun Lee (Department of Radiology, Chung-Ang University Hospital, Chung-Ang University College of Medicine) ;
  • Gyeongme Cho (Department of Radiology, Chung-Ang University Hospital, Chung-Ang University College of Medicine) ;
  • Jin Woo Yoon (Department of Radiology, Chung-Ang University Hospital, Chung-Ang University College of Medicine) ;
  • Joo Hyeok Choi (Department of Radiology, Chung-Ang University Hospital, Chung-Ang University College of Medicine) ;
  • Ijin Joo (Department of Radiology, Seoul National University Hospital) ;
  • Yoon Jin Lee (Department of Radiology, Seoul National University Bundang Hospital)
  • Received : 2022.07.25
  • Accepted : 2023.04.11
  • Published : 2023.06.01

Abstract

Objective: Detection of pneumoperitoneum using abdominal radiography, particularly in the supine position, is often challenging. This study aimed to develop and externally validate a deep learning model for the detection of pneumoperitoneum using supine and erect abdominal radiography.

Materials and Methods: A model that classifies abdominal radiographs into "pneumoperitoneum" and "non-pneumoperitoneum" classes was developed through knowledge distillation. To train the model with limited training data and weak labels, we used a recently proposed semi-supervised learning method called distillation for self-supervised and self-train learning (DISTL), which leverages the Vision Transformer. The model was first pre-trained on chest radiographs to exploit knowledge common to the two modalities, and then fine-tuned and self-trained on labeled and unlabeled abdominal radiographs acquired in both the supine and erect positions. In total, 191,212 chest radiographs (CheXpert data) were used for pre-training, and 5,518 labeled and 16,671 unlabeled abdominal radiographs were used for fine-tuning and self-supervised learning, respectively. The model was internally validated on 389 abdominal radiographs and externally validated on 475 and 798 abdominal radiographs from two external institutions. Diagnostic performance for pneumoperitoneum was evaluated using the area under the receiver operating characteristic curve (AUC) and compared with that of radiologists.

Results: In the internal validation, the proposed model achieved an AUC, sensitivity, and specificity of 0.881, 85.4%, and 73.3% for the supine position and 0.968, 91.1%, and 95.0% for the erect position, respectively. In the external validation at the two institutions, the AUCs were 0.835 and 0.852 for the supine position and 0.909 and 0.944 for the erect position. In the reader study, the readers' performance improved with the assistance of the proposed model.

Conclusion: The proposed model trained with the DISTL method can accurately detect pneumoperitoneum on abdominal radiography in both the supine and erect positions.
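To make the training strategy in the abstract concrete, the sketch below illustrates, in a greatly simplified form, the combination of transfer learning from chest radiographs with DISTL-style teacher-student self-training on labeled and unlabeled abdominal radiographs. It is not the authors' implementation: the backbone (torchvision's ViT-B/16), the checkpoint file name, the loss weighting, and all hyperparameters are hypothetical placeholders, and the self-supervised objectives used in the original DISTL method are omitted in favor of a single pseudo-label distillation term.

```python
# Hypothetical sketch of chest-radiograph transfer learning plus
# DISTL-style teacher-student self-training (simplified).
import copy
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16


def build_student(chest_xray_ckpt=None):
    # Two output classes: "pneumoperitoneum" vs. "non-pneumoperitoneum".
    model = vit_b_16(weights=None, num_classes=2)
    if chest_xray_ckpt is not None:
        # Transfer learning: reuse weights pre-trained on chest radiographs
        # (e.g., CheXpert); strict=False tolerates the different head shape.
        state = torch.load(chest_xray_ckpt, map_location="cpu")
        model.load_state_dict(state, strict=False)
    return model


def build_teacher(student):
    # The teacher starts as a copy of the student and is never updated by
    # back-propagation, only by an exponential moving average (EMA).
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher


@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Distillation target: teacher weights slowly track the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1.0 - momentum)


def train_step(student, teacher, optimizer, labeled_batch, unlabeled_images, tau=2.0):
    images, labels = labeled_batch
    # 1) Supervised loss on the small labeled set (fine-tuning).
    sup_loss = F.cross_entropy(student(images), labels)
    # 2) Self-training loss on unlabeled images: the student matches the
    #    softened prediction of the EMA teacher (pseudo-label distillation).
    with torch.no_grad():
        target = F.softmax(teacher(unlabeled_images) / tau, dim=1)
    distill_loss = F.kl_div(
        F.log_softmax(student(unlabeled_images) / tau, dim=1),
        target, reduction="batchmean")
    loss = sup_loss + distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

Under this sketch, evaluation would follow the abstract: the softmax probability of the "pneumoperitoneum" class is computed for each validation radiograph and summarized as an AUC (for example, with scikit-learn's roc_auc_score), with sensitivity and specificity read off at a chosen operating point.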

Acknowledgement

This study was supported by a research grant from the Biomedical Research Institute, Chung-Ang University Hospital (2022).
