Explainable & Safe Artificial Intelligence in Radiology

  • Synho Do (Laboratory of Medical Imaging and Computation, Department of Radiology, Massachusetts General Hospital and Harvard Medical School)
  • Received : 2024.09.08
  • Accepted : 2024.09.24
  • Published : 2024.10.01

Abstract

Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy and efficiency, but prediction uncertainty remains a critical barrier to safe clinical deployment. This review examines the key sources of that uncertainty (out-of-distribution inputs, aleatoric uncertainty, and model uncertainty) and highlights the importance of independent confidence metrics and explainable AI for safe integration. Independent confidence metrics assess the reliability of individual AI predictions, while explainable AI provides the transparency needed for effective collaboration between AI systems and radiologists. The development of zero-error-tolerance models, designed to minimize errors, sets a new standard for safety. Addressing these challenges will allow AI to become a trusted partner in radiology, advancing standards of care and patient outcomes.
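The three uncertainty sources named above are commonly estimated with standard techniques from the uncertainty-quantification literature: test-time augmentation for aleatoric uncertainty, Monte Carlo dropout for model uncertainty, and scoring functions over the logits for out-of-distribution detection. The sketch below is illustrative only and not taken from this review; it assumes a PyTorch image classifier with dropout layers, and the pass counts, augmentations, and energy-based OOD score are all assumed choices.

```python
import torch
import torch.nn.functional as F

# Minimal sketch (not from this review): three common ways to score
# the uncertainty types discussed in the abstract. Assumes `model` is
# a PyTorch classifier with dropout layers and `image` is a batched
# input tensor; pass counts and augmentations are arbitrary choices.

def mc_dropout_uncertainty(model, image, n_passes=30):
    """Model (epistemic) uncertainty: keep dropout active at inference
    and measure the spread of softmax outputs across stochastic passes."""
    model.train()  # enables dropout; use with care if the model has batchnorm
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(image), dim=-1) for _ in range(n_passes)]
        )
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged distribution: high entropy
    # flags predictions the model itself is unsure about.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def tta_aleatoric_uncertainty(model, image, n_aug=30):
    """Aleatoric uncertainty: perturb the input (small noise, flips)
    and measure the variance of the outputs under augmentation."""
    model.eval()
    outputs = []
    with torch.no_grad():
        for i in range(n_aug):
            aug = image + 0.01 * torch.randn_like(image)  # mild noise
            if i % 2 == 1:
                aug = torch.flip(aug, dims=[-1])  # horizontal flip
            outputs.append(F.softmax(model(aug), dim=-1))
    probs = torch.stack(outputs)
    return probs.mean(dim=0), probs.var(dim=0).sum(dim=-1)

def energy_ood_score(model, image, temperature=1.0):
    """Out-of-distribution score: negative free energy of the logits;
    in-distribution inputs typically yield lower (more negative) energy."""
    model.eval()
    with torch.no_grad():
        logits = model(image)
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)
```

In practice, thresholds on scores like these would be calibrated on a held-out in-distribution set before any case is flagged for radiologist review; the scores themselves are only inputs to such an independent confidence metric, not a complete one.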
