• Title/Abstract/Keyword: Dice coefficient

Automatic Meniscus Segmentation from Knee MR Images using Multi-atlas-based Locally-weighted Voting and Patch-based Edge Feature Classification

  • 김순빈;김현진;홍헬렌;왕준호
    • Journal of the Korea Computer Graphics Society, Vol. 24, No. 4, pp. 29-38, 2018
  • In this paper, we propose an automatic meniscus segmentation method for knee MR images that combines automatic localization of the meniscus, multi-atlas-based locally-weighted voting, and patch-based edge feature classification. First, the bones and knee articular cartilage are segmented, and the result is used to automatically localize a volume of interest (VOI) around the meniscus. Second, the meniscus is segmented within the VOI by multi-atlas-based locally-weighted voting that weights shape and intensity-distribution similarity. Third, to remove leakage into the collateral ligaments, whose intensity is similar to that of the meniscus, the segmentation is refined by patch-based edge feature classification weighted by shape and distance. The Dice similarity coefficient between the proposed method and manual segmentation was 80.13% for the medial meniscus and 80.81% for the lateral meniscus, improvements of 7.25% and 1.31%, respectively, over segmentation by multi-atlas-based locally-weighted voting alone.
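
Every entry on this page reports a Dice similarity coefficient (DSC), so a minimal reference implementation is useful context. The sketch below, assuming NumPy boolean masks as input, computes DSC = 2|A ∩ B| / (|A| + |B|), the overlap measure used above to compare automatic and manual segmentations; the function name and the empty-mask convention are illustrative choices, not from the paper.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# A returned value of 0.8013 corresponds to the 80.13% reported above.
```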

Molecular Typing of Salmonella enterica serovar Typhi Strains Isolated in Busan by Pulsed-Field Gel Electrophoresis

  • 민상기;이주현;박은희;김정아;김규원
    • Journal of Life Science, Vol. 16, No. 4, pp. 664-671, 2006
  • We analyzed changes in antimicrobial resistance and performed molecular epidemiological typing by pulsed-field gel electrophoresis (PFGE) of Salmonella enterica serovar Typhi strains isolated in Busan from 1996 to 2005. In antimicrobial susceptibility testing of all 424 isolates, apart from 6 multidrug-resistant (MDR) strains (1.4%) and 2 strains resistant only to nalidixic acid, the remaining 416 strains (98.1%) were susceptible to all 18 antimicrobial agents tested. PFGE analysis with XbaI of 50 sporadic isolates, performed to assess the genetic heterogeneity of the Busan isolates, revealed at least 32 distinct patterns. Each pattern comprised 13 to 18 restriction fragments ranging in size from about 20 kb to 630 kb. These results show that PFGE can serve as a useful epidemiological marker for sporadic or outbreak cases of typhoid fever in Busan, and they should also provide valuable baseline data for building a nationwide PulseNet.
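
The abstract does not say how the PFGE banding patterns were compared, but Dice (Sørensen) band-matching similarity with a fragment-size tolerance is the metric conventionally used for PFGE fingerprints, which is presumably why this entry matches the search keyword. A hypothetical sketch; the tolerance value and the greedy one-to-one matching are assumptions:

```python
def band_dice(bands_a, bands_b, tolerance=0.05):
    """Dice similarity between two PFGE band patterns.

    bands_a, bands_b: restriction-fragment sizes in kb (e.g. the 13-18
    fragments of 20-630 kb reported above). Two bands match when their
    sizes differ by less than `tolerance` (as a fraction of the larger
    size); each band may be matched at most once.
    """
    unmatched_b = sorted(bands_b)
    matches = 0
    for size in sorted(bands_a):
        for i, other in enumerate(unmatched_b):
            if abs(size - other) <= tolerance * max(size, other):
                matches += 1
                del unmatched_b[i]
                break
    return 2.0 * matches / (len(bands_a) + len(bands_b))

# e.g. band_dice([630, 420, 210, 95, 20], [630, 415, 210, 90, 33]) -> 0.8
```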

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology, Vol. 22, No. 2, pp. 168-178, 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which performs automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements of the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was smaller than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN were accurate, stable, and efficient. This method is promising for automatically evaluating aortic morphology and alleviating the workload of radiologists in the near future.
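
The Bland-Altman figures quoted above (a bias followed by a range) are the mean difference and the 95% limits of agreement, bias ± 1.96 × SD of the paired differences. A minimal sketch with hypothetical diameter values; only the standard formula is assumed, not the paper's data:

```python
import numpy as np

def bland_altman(measured: np.ndarray, reference: np.ndarray):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical lumen diameters (mm) across patients:
auto = np.array([32.1, 28.4, 40.2, 35.0])
ref = np.array([32.0, 28.9, 40.1, 34.6])
bias, (lo, hi) = bland_altman(auto, ref)
print(f"bias {bias:+.3f} mm (95% LOA {lo:+.3f} to {hi:+.3f} mm)")
```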

Automated Measurement of Native T1 and Extracellular Volume Fraction in Cardiac Magnetic Resonance Imaging Using a Commercially Available Deep Learning Algorithm

  • Suyon Chang;Kyunghwa Han;Suji Lee;Young Joong Yang;Pan Ki Kim;Byoung Wook Choi;Young Joo Suh
    • Korean Journal of Radiology, Vol. 23, No. 12, pp. 1251-1259, 2022
  • Objective: T1 mapping provides valuable information regarding cardiomyopathies. Manual drawing is time consuming and prone to subjective error. Therefore, this study aimed to test a deep learning (DL) algorithm for the automated measurement of native T1 and extracellular volume (ECV) fractions in cardiac magnetic resonance (CMR) imaging with a temporally separated dataset. Materials and Methods: CMR images obtained from 95 participants (mean age ± standard deviation, 54.5 ± 15.2 years), including 36 with left ventricular hypertrophy (12 hypertrophic cardiomyopathy, 12 Fabry disease, and 12 amyloidosis), 32 with dilated cardiomyopathy, and 27 healthy volunteers, were included. A commercial DL algorithm based on 2D U-Net (Myomics-T1 software, version 1.0.0) was used for the automated analysis of T1 maps. Four radiologists, as study readers, performed manual analysis. The reference standard was the consensus result of manual analysis by two additional expert readers. The segmentation performance of the DL algorithm and the correlation and agreement between the automated measurements and the reference standard were assessed. Interobserver agreement among the four radiologists was also analyzed. Results: The DL algorithm successfully segmented the myocardium in 99.3% of slices in the native T1 map and 89.8% of slices in the post-T1 map, with Dice similarity coefficients of 0.86 ± 0.05 and 0.74 ± 0.17, respectively. Native T1 and ECV showed strong correlation and agreement between the DL measurements and the reference: for T1, r = 0.967 (95% confidence interval [CI], 0.951-0.978) and a bias of 9.5 msec (95% limits of agreement [LOA], -23.6 to 42.6 msec); for ECV, r = 0.987 (95% CI, 0.980-0.991) and a bias of 0.7% (95% LOA, -2.8% to 4.2%) on a per-subject basis. Agreement between the DL algorithm and each of the four radiologists was excellent (intraclass correlation coefficient [ICC], 0.98-0.99 for both native T1 and ECV), comparable to the pairwise agreement between the radiologists (ICC, 0.97-1.00 and 0.99-1.00 for native T1 and ECV, respectively). Conclusion: The DL algorithm allowed automated T1 and ECV measurements comparable to those of radiologists.
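
The per-subject correlations above are reported with 95% confidence intervals; such intervals are conventionally obtained via the Fisher z-transformation of r. A sketch assuming SciPy is available; whether the authors used this exact procedure is not stated:

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r and its (1 - alpha) CI via the Fisher z-transformation."""
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                      # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)              # standard error of z
    zcrit = stats.norm.ppf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, (lo, hi)
```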

Deep learning-based automatic segmentation of the mandibular canal on panoramic radiographs: A multi-device study

  • Moe Thu Zar Aung;Sang-Heon Lim;Jiyong Han;Su Yang;Ju-Hee Kang;Jo-Eun Kim;Kyung-Hoe Huh;Won-Jin Yi;Min-Suk Heo;Sam-Sun Lee
    • Imaging Science in Dentistry, Vol. 54, No. 1, pp. 81-91, 2024
  • Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
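
Precision, recall, and DSC, as reported for the seven networks, all derive from the same pixel-level counts, and DSC equals the harmonic mean (F1) of precision and recall. A minimal sketch, assuming boolean masks in which each count is nonzero:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise precision, recall, and Dice (F1) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # canal pixels found
    fp = np.logical_and(pred, ~truth).sum()  # false detections
    fn = np.logical_and(~pred, truth).sum()  # missed canal pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)       # equals 2PR / (P + R)
    return precision, recall, dice
```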

Evaluation of usefulness for Stereotactic Partial Breast Irradiation (S-PBI) by using Surface Fiducial Marker

  • 김종열;정동민;김세영;유현종;최정환;박효국;백종걸;이상규;조정희
    • The Journal of Korean Society for Radiation Therapy, Vol. 33, pp. 99-108, 2021
  • Purpose: To evaluate the usefulness of a non-invasive surface fiducial marker, as opposed to the conventional invasive fiducial marker insertion, for stereotactic partial breast irradiation (S-PBI) using the CyberKnife. Materials and Methods: Agreement of the imaging center was evaluated quantitatively with the Dice similarity coefficient after acquiring both-oblique (45°, 315°) images on the 2D simulator and the CyberKnife. The positional reproducibility of the surface fiducial markers was evaluated by attaching eight gold fiducial markers to the surface of an ATOM phantom according to our institutional protocol and analyzing the 2D simulation, treatment planning, and CyberKnife images. Results: The imaging-center agreement was 0.87 and 0.90 in the two oblique directions (45° and 315°), respectively. In the reproducibility evaluation, the left breast showed translational offsets of 0.3 mm superior/inferior, -0.3 mm left/right, and 0.4 mm anterior/posterior, with rotational offsets of 0.3° roll, 0.2° pitch, and 0.4° yaw. The right breast showed -0.1 mm superior/inferior, -0.1 mm left/right, and -0.1 mm anterior/posterior, with 0.2° roll, 0.1° pitch, and 0.1° yaw. Conclusion: In CyberKnife S-PBI, the non-invasive surface fiducial marker protocol prevented pain and infection, shortened pre-treatment preparation time, and reduced the patient's financial burden; given the high imaging-center agreement and marker reproducibility, it should be useful for treatment.
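
Here the Dice coefficient is applied to pairs of 2D oblique images rather than to segmentation masks, which presumably requires binarizing each image first. A hypothetical sketch; the normalization and threshold are illustrative assumptions, not the study's procedure:

```python
import numpy as np

def image_dice(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.5):
    """Dice overlap of two grayscale projections after binarization.

    Each image is rescaled to [0, 1] and thresholded; the threshold
    value here is illustrative, not the one used in the study.
    Assumes non-constant images and nonzero foreground in at least one.
    """
    def binarize(img):
        img = (img - img.min()) / (img.max() - img.min())
        return img > threshold

    a, b = binarize(img_a), binarize(img_b)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```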

Generative Adversarial Network-Based Image Conversion Among Different Computed Tomography Protocols and Vendors: Effects on Accuracy and Variability in Quantifying Regional Disease Patterns of Interstitial Lung Disease

  • Hye Jeon Hwang;Hyunjong Kim;Joon Beom Seo;Jong Chul Ye;Gyutaek Oh;Sang Min Lee;Ryoungwoo Jang;Jihye Yun;Namkug Kim;Hee Jun Park;Ho Yun Lee;Soon Ho Yoon;Kyung Eun Shin;Jae Wook Lee;Woocheol Kwon;Joo Sung Sun;Seulgi You;Myung Hee Chung;Bo Mi Gil;Jae-Kwang Lim;Youkyung Lee;Su Jin Hong;Yo Won Choi
    • Korean Journal of Radiology, Vol. 24, No. 8, pp. 807-820, 2023
  • Objective: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability of deep learning-based automated quantification of interstitial lung disease (ILD). Materials and Methods: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard or low radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on the original and converted CT images using deep learning-based software (Aview, Coreline Soft). The accuracy of quantification was analyzed using the Dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. Results: Three hundred fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of the fibrosis score, honeycombing, and reticulation increased significantly after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54, respectively; P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on the converted CT images. Conclusion: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification on CT images obtained using different scan parameters and manufacturers.
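
The per-pattern DSC gains quoted above (e.g., fibrosis 0.32 to 0.64) amount to computing Dice separately for each disease label on the original and converted images against the manual reference. A sketch over integer label maps; the label codes are illustrative, not Aview's actual encoding:

```python
import numpy as np

# Illustrative label codes; the software's actual encoding is not given.
PATTERNS = {1: "ground-glass opacity", 2: "reticulation", 3: "honeycombing",
            4: "consolidation", 5: "emphysema"}

def per_class_dice(pred: np.ndarray, manual: np.ndarray) -> dict:
    """Dice per disease pattern between predicted and manual label maps."""
    scores = {}
    for code, name in PATTERNS.items():
        p, m = pred == code, manual == code
        denom = p.sum() + m.sum()
        scores[name] = 2.0 * np.logical_and(p, m).sum() / denom if denom else 1.0
    return scores

# Compare quantification accuracy on original vs. GAN-converted CT, e.g.:
# per_class_dice(seg_original, seg_manual) vs. per_class_dice(seg_converted, seg_manual)
```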