• Title/Abstract/Keyword: Dice Coefficient

Search results: 69 (processing time: 0.019 seconds)

전립선암 영상유도 방사선 치료시 골반내장기의 체적변화에 따른 표적장기의 변화 (Inter-fractional Target Displacement in the Prostate Image-Guided Radiotherapy using Cone Beam Computed Tomography)

  • 동갑상;백창욱;정윤정;배재범;최영은;성기훈
    • 대한방사선치료학회지 / Vol. 28, No. 2 / pp. 161-169 / 2016
  • Purpose: To identify changes in the position and shape of the prostate caused by volume changes of the bladder and rectum during radiotherapy for prostate cancer, and to determine how these changes affect the target volume. Materials and Methods: Retrospective image analysis and contouring were performed for six patients treated according to our institution's prostate cancer radiotherapy protocol. The prostate, bladder, and rectum were contoured on the planning computed tomography (pCT) and on the cone-beam CT (CBCT) images acquired at each treatment fraction. Prostate displacement was assessed after bony-anatomy-based registration of the two image sets, and the Dice similarity coefficient (DSC) was used to analyze changes in prostate position, shape, and size together. Results: The mean prostate volume on pCT was 37.2 cm³, with size changes within about 5%; the mean prostate DSC was 89.9%, with the distribution varying from patient to patient. Correlation analysis between bladder volume change and prostate DSC showed no significant relationship (r = -0.084, p = 0.268), but in a stratified analysis by bladder volume increase or decrease, a statistically significant negative correlation between DSC and bladder volume change was observed when the bladder volume increased (r = -0.230, p = 0.049). Prostate DSC decreased as the rectal volume change increased (r = -0.162, p = 0.032), and the stratified analysis of rectal volume showed a stronger correlation when the volume was larger than on pCT (r = -0.240, p = 0.020). Conclusion: Keeping the bladder and rectal volumes constant did not by itself guarantee treatment accuracy. Soft-tissue-based registration using CBCT is therefore important in prostate cancer radiotherapy, and volume management with devices such as a rectal balloon is expected to help maintain treatment accuracy.

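A brief aside on the metric itself: the Dice similarity coefficient (DSC) used in this entry is twice the volume of overlap between two contours divided by the sum of their volumes. The following minimal NumPy sketch (hypothetical array names, not the authors' analysis code) shows the computation on binary masks:

    import numpy as np

    def dice_similarity_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """DSC = 2|A ∩ B| / (|A| + |B|) for two equally shaped binary masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Toy example: a pCT prostate contour vs. a CBCT contour shifted by 2 voxels.
    pct_mask = np.zeros((64, 64, 64), dtype=bool)
    cbct_mask = np.zeros_like(pct_mask)
    pct_mask[20:40, 20:40, 20:40] = True
    cbct_mask[22:42, 20:40, 20:40] = True
    print(f"DSC = {dice_similarity_coefficient(pct_mask, cbct_mask):.3f}")  # 0.900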

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology / Vol. 24, No. 4 / pp. 294-304 / 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT of the abdomen that was obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images with 40, 60, and 80 keV. A deep learning based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance in the Dice similarity coefficient (DSC) and difference ratio of the liver volume relative to the ground truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and ground-truth volume. Results: The original CT images showed variable and poor segmentation performances. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed using various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
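
The entry above judges volumetric agreement with a volume difference ratio and the concordance correlation coefficient (CCC). As a reference for those two quantities, here is a small sketch using Lin's CCC formula on hypothetical liver volumes (the names and numbers are illustrative, not from the study):

    import numpy as np

    def volume_difference_ratio(measured_ml: np.ndarray, truth_ml: np.ndarray) -> np.ndarray:
        """Per-case |measured - truth| / truth, in percent."""
        return 100.0 * np.abs(measured_ml - truth_ml) / truth_ml

    def concordance_correlation_coefficient(x: np.ndarray, y: np.ndarray) -> float:
        """Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # population variances
        cov = ((x - mx) * (y - my)).mean()   # population covariance
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical liver volumes (mL): ground truth vs. automated segmentation.
    truth = np.array([812.0, 1020.0, 955.0, 1310.0, 740.0])
    auto = np.array([830.0, 1005.0, 970.0, 1290.0, 755.0])
    print(volume_difference_ratio(auto, truth).round(2))
    print(f"CCC = {concordance_correlation_coefficient(auto, truth):.3f}")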

무릎 MR 영상에서 다중 아틀라스 기반 지역적 가중 투표 및 패치 기반 윤곽선 특징 분류를 통한 반월상 연골 자동 분할 (Automatic Meniscus Segmentation from Knee MR Images using Multi-atlas-based Locally-weighted Voting and Patch-based Edge Feature Classification)

  • 김순빈;김현진;홍헬렌;왕준호
    • 한국컴퓨터그래픽스학회논문지 / Vol. 24, No. 4 / pp. 29-38 / 2018
  • This paper proposes an automatic meniscus segmentation method for knee MR images that combines automatic meniscus localization, multi-atlas-based locally-weighted voting, and patch-based edge feature classification. First, the bones and knee articular cartilage are segmented and used to automatically localize a volume of interest for the menisci. Second, within this volume of interest, the menisci are segmented by multi-atlas-based locally-weighted voting that weights shape and intensity-distribution similarity. Third, to remove leakage into the collateral ligaments, which have similar intensities, the segmentation is refined by patch-based edge feature classification that considers shape and distance weights. The Dice similarity coefficient between the proposed and manual segmentations was 80.13% for the medial meniscus and 80.81% for the lateral meniscus, improvements of 7.25% and 1.31%, respectively, over segmentation by multi-atlas-based locally-weighted voting alone.
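
The multi-atlas step in this entry fuses the labels of several registered atlases by locally-weighted voting, where each atlas vote is weighted by its local similarity to the target image. The sketch below uses a simple Gaussian of the voxel-wise intensity difference as the weight; the paper's weighting additionally uses shape and patch information, so this is only an illustrative simplification with hypothetical names:

    import numpy as np

    def locally_weighted_voting(target_img, atlas_imgs, atlas_labels, sigma=30.0):
        """Fuse binary atlas labels by locally-weighted voting.

        Each registered atlas votes at every voxel with a weight that decays
        with its intensity difference from the target (Gaussian kernel).
        """
        votes = np.zeros(target_img.shape, dtype=float)
        weights = np.zeros(target_img.shape, dtype=float)
        for img, lab in zip(atlas_imgs, atlas_labels):
            w = np.exp(-((target_img - img) ** 2) / (2.0 * sigma ** 2))
            votes += w * lab
            weights += w
        return votes / np.maximum(weights, 1e-8) >= 0.5  # fused binary label

    # Toy example: three registered atlases whose labels are slightly shifted cubes.
    rng = np.random.default_rng(0)
    target = rng.normal(100.0, 10.0, size=(8, 8, 8))
    atlas_imgs, atlas_labels = [], []
    for shift in (0, 1, 2):
        atlas_imgs.append(target + rng.normal(0.0, 5.0, size=target.shape))
        lab = np.zeros_like(target)
        lab[2 + shift:6 + shift, 2:6, 2:6] = 1.0
        atlas_labels.append(lab)
    fused = locally_weighted_voting(target, atlas_imgs, atlas_labels)
    print(fused.sum())  # voxels where a weighted majority of atlases agree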

부산지역에서 분리된 Salmonella enterica serovar Typhi균에 대한 PFGE를 이용한 Molecular typing (Molecular Typing of Salmonella enterica serovar Typhi Strains Isolated in Busan by Pulsed-Field Gel Electrophoresis)

  • 민상기;이주현;박은희;김정아;김규원
    • 생명과학회지 / Vol. 16, No. 4 / pp. 664-671 / 2006
  • We analyzed trends in antimicrobial resistance and, using pulsed-field gel electrophoresis (PFGE), the molecular epidemiological types of Salmonella enterica serovar Typhi strains isolated in Busan from 1996 to 2005. Antimicrobial susceptibility testing of all 424 isolates showed that, apart from 6 multidrug-resistant (MDR) isolates (1.4%) and 2 isolates resistant only to nalidixic acid, the remaining 416 isolates (98.1%) were susceptible to all 18 antimicrobial agents tested. To examine the genetic heterogeneity of the Busan typhoid isolates, PFGE with XbaI digestion was performed on 50 sporadic isolates and revealed at least 32 distinct patterns. The number of restriction fragments per pattern ranged from 13 to 18, with fragment sizes of approximately 20 kb to 630 kb. These results indicate that PFGE can serve as a useful epidemiological marker for sporadic cases or outbreaks of typhoid fever in Busan and should provide valuable baseline data for building a nationwide PulseNet.

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology / Vol. 22, No. 2 / pp. 168-178 / 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural (CNN) network, which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and measurement by the manual and deep learning method (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: The performance of efficient segmentation and diameter measurement of TBADs based on the 3D deep CNN was both accurate and stable. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
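
The Bland-Altman figures quoted in this entry (a bias with 95% limits of agreement) come from the mean and standard deviation of the paired differences. A short sketch with made-up diameters illustrates the computation:

    import numpy as np

    def bland_altman(measured: np.ndarray, reference: np.ndarray):
        """Return (bias, lower LOA, upper LOA) for paired measurements.

        Bias is the mean difference; the 95% limits of agreement are
        bias ± 1.96 * SD of the differences.
        """
        diff = measured - reference
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Hypothetical true-lumen diameters (mm): deep learning vs. reference standard.
    dl = np.array([28.1, 31.4, 25.9, 33.0, 29.7, 26.8])
    ref = np.array([28.4, 31.0, 26.3, 32.6, 30.1, 26.5])
    bias, lo, hi = bland_altman(dl, ref)
    print(f"bias = {bias:.3f} mm, 95% LOA = ({lo:.3f}, {hi:.3f}) mm")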

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology / Vol. 23, No. 12 / pp. 1251-1259 / 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies. Manual drawing is time consuming and prone to subjective errors. Therefore, this study aimed to test a DL algorithm for the automated measurement of native T1 and extracellular volume (ECV) fractions in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained for 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years), including 36 left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 dilated cardiomyopathy, and 27 healthy volunteers, were included. A commercial deep learning (DL) algorithm based on 2D U-net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of the manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurement and the reference standard were assessed. Interobserver agreement among the four radiologists was analyzed. Results: DL successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-T1 map with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between DL and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) and bias of 9.5 msec (95% limits of agreement [LOA], -23.6-42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) and bias of 0.7% (95% LOA, -2.8%-4.2%) on per-subject basis. Agreements between DL and each of the four radiologists were excellent (intraclass correlation coefficient [ICC] of 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC of 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
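
For context on the ECV values reported here: the extracellular volume fraction is conventionally derived from the change in myocardial and blood-pool relaxation rates (R1 = 1/T1) between native and post-contrast maps, scaled by one minus the hematocrit. The sketch below applies that standard formula to hypothetical T1 values; it is not the Myomics-T1 software's implementation:

    def extracellular_volume_fraction(t1_myo_native_ms: float, t1_myo_post_ms: float,
                                      t1_blood_native_ms: float, t1_blood_post_ms: float,
                                      hematocrit: float) -> float:
        """ECV = (1 - Hct) * (ΔR1_myocardium / ΔR1_blood), with R1 = 1 / T1."""
        dr1_myo = 1.0 / t1_myo_post_ms - 1.0 / t1_myo_native_ms
        dr1_blood = 1.0 / t1_blood_post_ms - 1.0 / t1_blood_native_ms
        return (1.0 - hematocrit) * dr1_myo / dr1_blood

    # Hypothetical values: native/post-contrast myocardial and blood-pool T1 (ms).
    ecv = extracellular_volume_fraction(1200.0, 520.0, 1650.0, 350.0, hematocrit=0.42)
    print(f"ECV = {100 * ecv:.1f}%")  # ≈ 28%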

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry / Vol. 54, No. 1 / pp. 81-91 / 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.

표면위치표지자를 적용한 정위적 부분유방방사선치료의 유용성 평가 (Evaluation of usefulness for Stereotactic Partial Breast Irradiation(S-PBI) by using Surface Fiducial Marker)

  • 김종열;정동민;김세영;유현종;최정환;박효국;백종걸;이상규;조정희
    • 대한방사선치료학회지 / Vol. 33 / pp. 99-108 / 2021
  • Purpose: To evaluate the usefulness of a non-invasive approach, rather than conventional invasive fiducial marker implantation, for stereotactic partial breast irradiation (S-PBI) using the CyberKnife. Materials and Methods: Agreement of the imaging center was evaluated quantitatively with the Dice similarity coefficient after acquiring both-oblique (45°, 315°) images on the 2D simulator and the CyberKnife. The positional reproducibility of the surface fiducial markers was evaluated by attaching eight gold markers to the surface of an ATOM phantom according to our institutional protocol and analyzing the 2D simulation, treatment planning, and CyberKnife images. Results: The imaging-center agreement was 0.87 and 0.90 in the 45° and 315° oblique directions, respectively. For the left breast, marker reproducibility was 0.3 mm superior/inferior, -0.3 mm left/right, and 0.4 mm anterior/posterior in translation, and 0.3° roll, 0.2° pitch, and 0.4° yaw in rotation; for the right breast, it was -0.1 mm superior/inferior, -0.1 mm left/right, and -0.1 mm anterior/posterior in translation, and 0.2° roll, 0.1° pitch, and 0.1° yaw in rotation. Conclusion: A non-invasive surface-fiducial-marker protocol for CyberKnife S-PBI avoids pain and infection, shortens preparation time, and reduces the financial burden on patients, and given the high imaging-center agreement and marker reproducibility it is expected to be useful for treatment.

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology / Vol. 24, No. 8 / pp. 807-820 / 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and variability in quantifying interstitial lung disease (ILD) using a deep learning-based automated software. Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard- or low-radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (Group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on original and converted CT images using a deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation significantly increased after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54, P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT. Conclusion: CT conversion using a RouteGAN can improve the accuracy and variability of CT images obtained using different scan parameters and manufacturers in deep learning-based quantification of ILD.
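
The pixel-wise overlap metrics used in this last entry (recall, precision, and DSC against the radiologist's manual quantification) can be summarized in a few lines. The sketch below computes them for one disease class on toy masks; it illustrates the metrics only, not the Aview software or the study pipeline:

    import numpy as np

    def overlap_metrics(pred: np.ndarray, ref: np.ndarray) -> dict:
        """Pixel-wise recall, precision, and DSC for one disease class."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        tp = np.logical_and(pred, ref).sum()
        recall = tp / max(ref.sum(), 1)      # fraction of reference pixels found
        precision = tp / max(pred.sum(), 1)  # fraction of predicted pixels correct
        dsc = 2.0 * tp / max(pred.sum() + ref.sum(), 1)
        return {"recall": recall, "precision": precision, "dsc": dsc}

    # Toy 2D masks standing in for one labeled CT slice (e.g., reticulation).
    ref = np.zeros((100, 100), dtype=bool)
    ref[20:60, 20:60] = True
    pred = np.zeros_like(ref)
    pred[25:65, 20:60] = True
    print({k: round(float(v), 3) for k, v in overlap_metrics(pred, ref).items()})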