Interobserver Variability in the Interpretation of Microcalcifications in Digital Magnification Mammographies


  • Jeon, Su-Jin (Department of Radiology, National Health Insurance Corporation Ilsan Hospital) ;
  • Kim, Min-Jung (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine) ;
  • Kim, Eun-Kyung (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine) ;
  • Son, Eun-Ju (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine) ;
  • Youk, Ji-Hyun (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine) ;
  • Kwak, Jin-Young (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine) ;
  • Choi, Seon-Hyeong (Department of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine)
  • Received : 2009.12.31
  • Accepted : 2010.08.02
  • Published : 2010.10.01

Abstract

Purpose: To analyze interobserver variability among radiologists in the description and final assessment of microcalcifications on digital magnification mammograms. Materials and Methods: From 2005 to 2006, five radiologists independently analyzed, in a blinded manner, 66 microcalcification lesions (40 benign, 26 malignant) in 65 patients on digital magnification mammograms. Each observer evaluated microcalcification morphology, distribution, and BI-RADS® final assessment category. Interobserver agreement was measured with the kappa statistic, and the rate of malignancy was assessed. Results: The mean kappa value was 0.19 for microcalcification morphology (slight agreement) and 0.54 for distribution (moderate agreement). The overall rate of malignancy for microcalcification morphology and distribution was 39%; among the morphologic descriptors, amorphous microcalcifications showed the lowest rate of malignancy (17%). For the BI-RADS® final assessment categories, the mean kappa value was 0.29 and the mean rate of malignancy was 39%. Conclusion: Although interobserver agreement varied slightly according to the individual descriptors, overall agreement in the interpretation of microcalcifications on digital magnification mammograms was slight to moderate. To improve interobserver agreement in the interpretation of microcalcifications, proper image quality control, standardization of interpretation criteria, and adequate training of radiologists are needed.
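The kappa statistic used above corrects raw percent agreement for the agreement expected by chance. As a minimal illustration (not the study's actual analysis, which averaged agreement across five readers), the sketch below computes Cohen's kappa for two hypothetical readers assigning BI-RADS morphology descriptors; the descriptor lists are invented for demonstration only.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two readers."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of cases rated identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each reader's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical morphology descriptors from two readers for six lesions.
r1 = ["amorphous", "fine pleomorphic", "amorphous", "coarse", "fine linear", "amorphous"]
r2 = ["amorphous", "fine pleomorphic", "coarse", "coarse", "amorphous", "amorphous"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.5
```

By the Landis and Koch benchmarks applied in studies of this kind, values of 0.00-0.20 indicate slight agreement, 0.21-0.40 fair, 0.41-0.60 moderate, and higher values substantial to almost perfect agreement; multi-reader designs typically report either the mean of all pairwise kappas or Fleiss' kappa.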

