• Title/Summary/Keyword: Dice Coefficient


Automated Detection and Segmentation of Bone Metastases on Spine MRI Using U-Net: A Multicenter Study

  • Dong Hyun Kim;Jiwoon Seo;Ji Hyun Lee;Eun-Tae Jeon;DongYoung Jeong;Hee Dong Chae;Eugene Lee;Ji Hee Kang;Yoon-Hee Choi;Hyo Jin Kim;Jee Won Chai
    • Korean Journal of Radiology / v.25 no.4 / pp.363-373 / 2024
  • Objective: To develop and evaluate a deep learning model for automated segmentation and detection of bone metastasis on spinal MRI. Materials and Methods: We included whole spine MRI scans of adult patients with bone metastasis: 662 MRI series from 302 patients (63.5 ± 11.5 years; male:female, 151:151) from three study centers obtained between January 2015 and August 2021 for training and internal testing (random split into 536 and 126 series, respectively) and 49 MRI series from 20 patients (65.9 ± 11.5 years; male:female, 11:9) from another center obtained between January 2018 and August 2020 for external testing. Three sagittal MRI sequences, including non-contrast T1-weighted image (T1), contrast-enhanced T1-weighted Dixon fat-only image (FO), and contrast-enhanced fat-suppressed T1-weighted image (CE), were used. Seven models trained using the 2D and 3D U-Nets were developed with different combinations (T1, FO, CE, T1 + FO, T1 + CE, FO + CE, and T1 + FO + CE). The segmentation performance was evaluated using Dice coefficient, pixel-wise recall, and pixel-wise precision. The detection performance was analyzed using per-lesion sensitivity and a free-response receiver operating characteristic curve. The performance of the model was compared with that of five radiologists using the external test set. Results: The 2D U-Net T1 + CE model exhibited superior segmentation performance in the external test compared to the other models, with a Dice coefficient of 0.699 and pixel-wise recall of 0.653. The T1 + CE model achieved per-lesion sensitivities of 0.828 (497/600) and 0.857 (150/175) for metastases in the internal and external tests, respectively. The radiologists demonstrated a mean per-lesion sensitivity of 0.746 and a mean per-lesion positive predictive value of 0.701 in the external test. Conclusion: The deep learning models proposed for automated segmentation and detection of bone metastases on spinal MRI demonstrated high diagnostic performance.
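The segmentation metrics used throughout these studies reduce to simple overlap counts on binary masks. As a minimal sketch (the toy masks below are illustrative, not from the study's data), the Dice coefficient and the pixel-wise recall and precision can be computed as:

```python
def dice(pred, gt):
    """Dice = 2*|P∩G| / (|P|+|G|) on flat binary masks (lists of 0/1)."""
    inter = sum(p & g for p, g in zip(pred, gt))
    return 2.0 * inter / (sum(pred) + sum(gt))

def pixel_recall(pred, gt):
    """Share of ground-truth pixels that the prediction recovers."""
    inter = sum(p & g for p, g in zip(pred, gt))
    return inter / sum(gt)

def pixel_precision(pred, gt):
    """Share of predicted pixels that are truly positive."""
    inter = sum(p & g for p, g in zip(pred, gt))
    return inter / sum(pred)

# Toy 2x2 masks, flattened row by row.
gt   = [1, 1, 0, 0]
pred = [1, 0, 1, 0]
print(dice(pred, gt))             # 0.5
print(pixel_recall(pred, gt))     # 0.5
print(pixel_precision(pred, gt))  # 0.5
```

A 2D mask can be flattened row by row before being passed in; the counts, and hence the metrics, are unchanged.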

Skin Lesion Image Segmentation Based on Adversarial Networks

  • Wang, Ning;Peng, Yanjun;Wang, Yuanhong;Wang, Meiling
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2826-2840 / 2018
  • Traditional methods based on active contours or region merging are ineffective for images with blurred borders or hair occlusion. In this paper, a structure based on convolutional neural networks is proposed for skin lesion image segmentation. The structure mainly consists of two networks: a segmentation net and a discrimination net. The segmentation net is designed based on U-Net and generates the mask of the lesion, while the discrimination net consists of only convolutional layers and determines whether its input comes from the ground-truth labels or from generated images. Images were obtained from the "Skin Lesion Analysis Toward Melanoma Detection" challenge hosted at the ISBI 2016 conference. We achieved an average segmentation accuracy of 0.97, a Dice coefficient of 0.94, and a Jaccard index of 0.89, outperforming the other existing state-of-the-art segmentation networks, including the winner of the ISBI 2016 skin melanoma segmentation challenge.
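For binary masks, the Dice coefficient and the Jaccard index are monotonic transforms of one another, J = D / (2 - D), so either score can be recovered from the other. A quick sanity check against the pair reported above (Dice 0.94, Jaccard 0.89):

```python
def jaccard_from_dice(d):
    # J = |A∩B| / |A∪B| = D / (2 - D)
    return d / (2.0 - d)

def dice_from_jaccard(j):
    # D = 2|A∩B| / (|A| + |B|) = 2J / (1 + J)
    return 2.0 * j / (1.0 + j)

print(round(jaccard_from_dice(0.94), 2))  # 0.89, consistent with the reported Jaccard index
```

This is why papers that report both metrics always rank methods identically under either one.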

Comparative Analysis of Cyanobacterial Communities from Polluted Reservoirs in Korea

  • Kim, Jin-Book;Moon, Mi-Sook;Lee, Dong-Hun;Lee, Sung-Taik;Bazzicalupo, Marco;Kim, Chi-Kyung
    • Journal of Microbiology / v.42 no.3 / pp.181-187 / 2004
  • Cyanobacteria are the dominant phototrophic bacteria in water environments. Here, the diversity of cyanobacteria in seven Korean reservoirs, in which different levels of algal blooms were observed during the summer of 2002, was examined by T-RFLP analysis. The number of T-RF bands in the HaeIII T-RFLP profiles from these water samples ranged from 20 to 44. Of these, cyanobacteria accounted for 6.1 to 27.2% of the total bacteria. The water samples could be clustered into two groups according to the Dice coefficient of the T-RF profiles. The eutrophic Dunpo and oligotrophic Chungju reservoirs were selected, and several representative clones from both reservoir waters were analyzed for the nucleotide sequences of their 16S rDNA. The major clones belonged to Microcystis and Anabaena species in the waters from the Dunpo and Chungju reservoirs, respectively, in agreement with the T-RFLP result. That is, Microcystis and Anabaena species were dominant in the eutrophic, polluted Dunpo and the oligotrophic Chungju reservoir waters, respectively. These results indicate a correlation between the prevalence of cyanobacterial species and the level of pollution in reservoir waters.
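In this ecological setting the Dice (Sørensen) coefficient is computed on the presence or absence of T-RF bands rather than on pixels. A sketch with hypothetical fragment lengths (the band sizes below are illustrative, not from the study):

```python
def dice_similarity(bands_a, bands_b):
    """Sørensen-Dice on presence/absence profiles: 2|A∩B| / (|A| + |B|)."""
    a, b = set(bands_a), set(bands_b)
    return 2.0 * len(a & b) / (len(a) + len(b))

# Hypothetical T-RF fragment lengths (bp) for two reservoir samples.
sample_1 = {62, 121, 203, 344}
sample_2 = {62, 203, 344, 410, 498}
print(dice_similarity(sample_1, sample_2))  # 3 shared bands of 4 and 5 -> 2*3/9 ≈ 0.667
```

Clustering then proceeds on the pairwise similarity matrix built from such profiles.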

Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning (딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구)

  • Lim, SangHeon;Kim, YoungJae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.468-475 / 2020
  • In this paper, we proposed a deep-learning-based system for three-dimensional visualization of medical images in augmented reality. In the proposed system, an artificial neural network model performs fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After applying a three-dimensional volume rendering method to the segmented images, the result is visualized on augmented reality devices. In the experiment, when nodules were present in the lung region, they could be easily distinguished with the naked eye, and the location and shape of the lesions were intuitively confirmed. Evaluation was performed by comparing the automatic segmentation results on the test dataset with manually segmented images. The segmentation model obtained a lung-region DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10%, and a pulmonary-nodule DSC of 91.88%, precision of 93.05%, and recall of 90.94%. If the proposed system is applied in medical fields such as clinical practice and medical education, it is expected to contribute to patient-specific organ modeling, lesion analysis, and surgical education and training.

A Computer Aided Diagnosis Algorithm for Classification of Malignant Melanoma based on Deep Learning (딥 러닝 기반의 악성흑색종 분류를 위한 컴퓨터 보조진단 알고리즘)

  • Lim, Sangheon;Lee, Myungsuk
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.4 / pp.69-77 / 2018
  • Malignant melanoma accounts for about 1 to 3% of all malignant tumors in the West; in the US in particular, it causes more than 9,000 deaths each year. In general, it is difficult to detect the features of skin lesions through photography alone. In this paper, we propose a deep-learning-based computer-aided diagnosis algorithm for classifying malignant melanoma and benign skin tumors in RGB skin images. The proposed model comprises a tumor lesion segmentation model and a malignant melanoma classification model. First, U-Net was used to segment the skin lesion area in the dermoscopic image. We then classified malignant melanoma and benign tumors with ResNet, using the segmented lesion images and experts' labels. The U-Net model obtained a Dice similarity coefficient of 83.45% against the experts' labels, and the classification accuracy for malignant melanoma was 83.06%. The proposed artificial intelligence algorithm is therefore expected to serve as a computer-aided diagnosis tool and to help detect malignant melanoma at an early stage.

Detecting Boundary of Erythema Using Deep Learning (딥러닝을 활용한 피부 발적의 경계 판별)

  • Kwon, Gwanyoung;Kim, Jong Hoon;Kim, Young Jae;Lee, Sang Min;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1492-1499 / 2021
  • The skin prick test is widely used to diagnose allergic sensitization to common inhalant or food allergens; positivity is determined manually by calculating the areas or mean diameters of the wheals and erythemas provoked by allergens pricked into the patient's skin. In this work, we propose a segmentation algorithm based on U-Net, a fully convolutional network (FCN) model, to determine erythema boundaries more objectively. The performance of the model is analyzed by comparing U-Net's automatic segmentation of the test data with the results of manual segmentation. The average Dice coefficient was 94.93%, and the average precision and sensitivity were 95.19% and 95.24%, respectively. We find that the proposed algorithm effectively discriminates the skin's erythema boundaries and expect it to play an auxiliary role in skin prick tests in future clinical practice.

A Comparative Performance Analysis of Segmentation Models for Lumbar Key-points Extraction (요추 특징점 추출을 위한 영역 분할 모델의 성능 비교 분석)

  • Seunghee Yoo;Minho Choi;Jun-Su Jang
    • Journal of Biomedical Engineering Research / v.44 no.5 / pp.354-361 / 2023
  • Most spinal diseases are diagnosed based on the subjective judgment of a specialist, so numerous studies have sought objectivity by automating the diagnosis process with deep learning. In this paper, we propose a method that combines segmentation and feature extraction, two techniques frequently used in diagnosing spinal diseases. Four models, U-Net, U-Net++, DeepLabv3+, and M-Net, were trained and compared on 1000 X-ray images, and key-points were derived using the Douglas-Peucker algorithm. For evaluation, the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, recall, and area under the precision-recall curve were used; U-Net++ showed the best performance on all metrics, with an average DSC of 0.9724. For the average Euclidean distance between estimated key-points and ground truth, U-Net was best, followed by U-Net++; however, the difference in average distance was about 0.1 pixels, which is not significant. The results suggest that key-points can be extracted from segmentation results and used to diagnose various spinal diseases, including spondylolisthesis, with consistent criteria.
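The Douglas-Peucker step mentioned above simplifies a boundary polyline to its salient key-points by recursively keeping the point farthest from the chord between the current endpoints. A minimal sketch (the tolerance and coordinates below are illustrative, not from the paper):

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den if den else math.hypot(px - ax, py - ay)

def douglas_peucker(points, epsilon):
    """Simplify a polyline, keeping points that deviate more than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]        # everything within tolerance: drop interior
    left = douglas_peucker(points[: idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right                  # merge, without duplicating the split point

# A nearly flat contour with one sharp deviation: the flat detail is discarded,
# the deviation survives as a key-point.
contour = [(0, 0), (1, 0.1), (2, 0), (3, 5), (4, 0)]
print(douglas_peucker(contour, 1.0))
```

Applied to a segmented vertebral contour, the surviving vertices serve as the key-points whose distance to ground truth the paper evaluates.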

A dual path encoder-decoder network for placental vessel segmentation in fetoscopic surgery

  • Yunbo Rao;Tian Tan;Shaoning Zeng;Zhanglin Chen;Jihong Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.15-29 / 2024
  • A fetoscope is an optical endoscope often applied in fetoscopic laser photocoagulation to treat twin-to-twin transfusion syndrome. During an operation, the clinician observes the abnormal placental vessels through the endoscope to guide the procedure; however, the low-quality imaging and narrow field of view of the fetoscope increase the difficulty of the operation. Accurate placental vessel segmentation of fetoscopic images can assist fetoscopic laser photocoagulation and help identify the abnormal vessels. This study proposes a method to solve these problems: a novel encoder-decoder network with a dual-path structure that segments the placental vessels in fetoscopic images. In particular, we introduce a channel attention mechanism and a continuous convolution structure to obtain multi-scale features with their weights. Moreover, a switching connection is inserted between the corresponding blocks of the two paths to strengthen their relationship. In a set of blood vessel segmentation experiments conducted on a public fetoscopic image dataset, our method achieved higher scores than current mainstream segmentation methods, raising the Dice similarity coefficient, intersection over union, and pixel accuracy by 5.80%, 8.39%, and 0.62%, respectively.
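A channel attention mechanism of the kind referenced here is, in the common squeeze-and-excitation style, a learned per-channel reweighting of the feature map. A NumPy sketch with random, untrained weights (the paper's exact module definition is not given, so the shapes and reduction ratio below are assumptions):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map."""
    z = x.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)             # excite: FC + ReLU into a reduced dimension
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))     # FC + sigmoid -> per-channel weights in (0, 1)
    return x * s[:, None, None]             # rescale each channel by its weight

rng = np.random.default_rng(0)
C, H, W, r = 8, 16, 16, 2                   # r: assumed channel reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))       # untrained weights, for illustration only
w2 = rng.standard_normal((C, C // r))
y = channel_attention(x, w1, w2)
print(y.shape)                              # same shape as x; channels are reweighted
```

Because the sigmoid outputs lie in (0, 1), the module can only attenuate channels, letting the network learn to emphasize vessel-bearing feature maps.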

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology / v.66 no.1 / pp.167-177 / 2024
  • Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of bovine iris with a minimized use of annotation labels utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. This framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy and dice coefficient score of 99.50% and 98.35%, respectively. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.

Deep Learning based Skin Lesion Segmentation Using Transformer Block and Edge Decoder (트랜스포머 블록과 윤곽선 디코더를 활용한 딥러닝 기반의 피부 병변 분할 방법)

  • Kim, Ji Hoon;Park, Kyung Ri;Kim, Hae Moon;Moon, Young Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.533-540 / 2022
  • Specialists use dermatoscopy to detect skin cancer as early as possible, but it is difficult to delineate skin lesions accurately because they take various shapes. Recent deep learning methods for skin lesion segmentation show high performance but struggle where the boundary between healthy skin and the lesion is unclear. To address these issues, the proposed method constructs a transformer block to segment the skin lesion effectively, and an edge decoder at each layer of the network to segment the lesion in detail. Experimental results show that the proposed method improves the Dice coefficient by 0.041 to 0.071 and the Jaccard index by 0.062 to 0.112 compared with the previous methods.