• Title/Summary/Keyword: Lesion Segmentation

Search Result 36

A Practical Implementation of Deep Learning Method for Supporting the Classification of Breast Lesions in Ultrasound Images

  • Han, Seokmin;Lee, Suchul;Lee, Jun-Rak
    • International journal of advanced smart convergence / v.8 no.1 / pp.24-34 / 2019
  • In this research, a practical deep learning framework to differentiate lesions and nodules in breast ultrasound images has been proposed. 7408 ultrasound breast images of 5151 patient cases were collected. All cases were biopsy-proven, and lesions were semi-automatically segmented. To compensate for the shift caused in segmentation, the boundary of each lesion was drawn using a Fully Convolutional Network (FCN) segmentation method based on the radiologist's specified point. The data set consists of 4254 benign and 3154 malignant lesions. Of the 7408 ultrasound breast images, 6579 were used for training and 829 for testing. The training images were augmented by varying the margin between the boundary of each lesion and the boundary of the image itself. The images were processed through histogram equalization, image cropping, and margin augmentation. The networks trained on the data with and without augmentation both achieved AUC over 0.95. The network exhibited about 90% accuracy, 0.86 sensitivity, and 0.95 specificity. Although the proposed framework still requires a radiologist to point to the location of the target ROI, it showed promising results. It supports the human radiologist in achieving successful performance and helps create a fluent diagnostic workflow that meets the fundamental purpose of CADx.
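The margin-varying augmentation described in this abstract can be sketched as cropping around the lesion bounding box with different amounts of surrounding context. This is a minimal illustration, not the paper's code; the helper names and margin values are assumptions.

```python
def crop_with_margin(image, bbox, margin):
    """Crop a 2D image (list of lists) around a lesion bounding box,
    keeping `margin` pixels of context on each side (clamped to the image)."""
    y0, x0, y1, x1 = bbox  # lesion bounds, inclusive-exclusive
    h, w = len(image), len(image[0])
    top = max(0, y0 - margin)
    left = max(0, x0 - margin)
    bottom = min(h, y1 + margin)
    right = min(w, x1 + margin)
    return [row[left:right] for row in image[top:bottom]]

def augment_margins(image, bbox, margins=(0, 4, 8, 16)):
    """Generate several crops of the same lesion with different context margins."""
    return [crop_with_margin(image, bbox, m) for m in margins]
```

Per the abstract, each crop would then go through histogram equalization and resizing before being fed to the classifier.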

Mobile App for Detecting Canine Skin Diseases Using U-Net Image Segmentation (U-Net 기반 이미지 분할 및 병변 영역 식별을 활용한 반려견 피부질환 검출 모바일 앱)

  • Bo Kyeong Kim;Jae Yeon Byun;Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems / v.29 no.4 / pp.25-34 / 2024
  • This paper presents the development of a mobile application that detects and identifies canine skin diseases by training a deep learning-based U-Net model to infer the presence and location of skin lesions from images. U-Net, primarily used in medical imaging for image segmentation, is effective in distinguishing specific regions of an image in a polygonal form, making it suitable for identifying lesion areas in dogs. In this study, six major canine skin diseases were defined as classes, and the U-Net model was trained to differentiate among them. The model was then implemented in a mobile app, allowing users to perform lesion analysis and prediction through simple camera shots, with the results provided directly to the user. This enables pet owners to monitor the health of their pets and obtain information that aids in early diagnosis. By providing a quick and accurate diagnostic tool for pet health management through deep learning, this study emphasizes the significance of developing an easily accessible service for home use.

Multiple Sclerosis Lesion Detection using 3D Autoencoder in Brain Magnetic Resonance Images (3D 오토인코더 기반의 뇌 자기공명영상에서 다발성 경화증 병변 검출)

  • Choi, Wonjune;Park, Seongsu;Kim, Yunsoo;Gahm, Jin Kyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.979-987 / 2021
  • Multiple Sclerosis (MS) can be diagnosed early by detecting lesions in brain magnetic resonance images (MRI). Unsupervised anomaly detection methods based on autoencoders have recently been proposed for automated detection of MS lesions. However, these autoencoder-based methods were developed only for 2D images (e.g. 2D cross-sectional slices) of MRI, and so do not utilize the full 3D information of MRI. In this paper, therefore, we propose a novel 3D autoencoder-based framework for detecting the lesion volume of MS in MRI. We first define a 3D convolutional neural network (CNN) for full MRI volumes, and build each encoder and decoder layer of the 3D autoencoder on 3D CNNs. We also add skip connections between the encoder and decoder layers for effective data reconstruction. In the experiments, we compare the 3D autoencoder-based method with 2D autoencoder models, using training data from 80 healthy subjects in the Human Connectome Project (HCP) and test data from 25 MS patients in the Longitudinal multiple sclerosis lesion segmentation challenge, and show that the proposed method improves MS lesion prediction performance by up to 15%.
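The anomaly-detection step behind such autoencoder methods can be sketched without the network itself: after the autoencoder (trained only on healthy anatomy) reconstructs a volume, voxels with large reconstruction error are flagged as lesion candidates. This is a generic sketch under assumed flat-list volumes and an assumed threshold, not the paper's implementation.

```python
def lesion_candidates(volume, reconstruction, threshold):
    """Flag voxels where the autoencoder's reconstruction error is large.
    `volume` and `reconstruction` are flat lists of voxel intensities;
    returns a binary mask of the same length."""
    return [1 if abs(v - r) > threshold else 0
            for v, r in zip(volume, reconstruction)]
```

Because the model never learns to reconstruct lesions, lesion voxels tend to have higher error than healthy tissue, which is what this thresholding exploits.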

Multi-scale context fusion network for melanoma segmentation

  • Zhenhua Li;Lei Zhang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.7 / pp.1888-1906 / 2024
  • Aiming at the problems that the edges of melanoma images are fuzzy, the contrast with the background is low, and hair occlusion makes accurate segmentation difficult, this paper proposes MSCNet, a model for melanoma segmentation based on the U-Net framework. Firstly, a multi-scale pyramid fusion module is designed to reconstruct the skip connections and transmit global information to the decoder. Secondly, a contextual information conduction module is innovatively added at the top of the encoder. The module provides different receptive fields for the segmentation target by using atrous (dilated) convolutions with different dilation rates, so as to better fuse multi-scale contextual information. In addition, in order to suppress redundant information in the input image and pay more attention to melanoma feature information, a global channel attention mechanism is introduced into the decoder. Finally, in order to address lesion class imbalance, this paper uses a combined loss function. The algorithm is verified on the ISIC 2017 and ISIC 2018 public datasets. The experimental results indicate that the proposed algorithm segments melanoma more accurately than other CNN-based image segmentation algorithms.
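A combined loss for lesion class imbalance is typically a weighted sum of cross-entropy and Dice loss. The abstract does not specify the exact combination, so the sketch below is illustrative; the mixing weight `alpha` and the smoothing term are assumptions.

```python
import math

def dice_loss(pred, target, eps=1.0):
    """Soft Dice loss over flat probability/label lists (0 = perfect overlap)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flat probability/label lists."""
    n = len(pred)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / n

def combined_loss(pred, target, alpha=0.5):
    """Weighted sum of Dice and BCE; `alpha` is an assumed mixing weight."""
    return alpha * dice_loss(pred, target) + (1 - alpha) * bce_loss(pred, target)
```

The Dice term is overlap-based, so it is insensitive to the large background class, while the cross-entropy term keeps per-pixel gradients well behaved.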

Deep Learning-Based Lumen and Vessel Segmentation of Intravascular Ultrasound Images in Coronary Artery Disease

  • Gyu-Jun Jeong;Gaeun Lee;June-Goo Lee;Soo-Jin Kang
    • Korean Circulation Journal / v.54 no.1 / pp.30-39 / 2024
  • Background and Objectives: Intravascular ultrasound (IVUS) evaluation of coronary artery morphology is based on lumen and vessel segmentation. This study aimed to develop an automatic segmentation algorithm and validate its performance in measuring quantitative IVUS parameters. Methods: A total of 1,063 patients were randomly assigned, with a ratio of 4:1, to the training and test sets. An independent data set of 111 IVUS pullbacks was obtained to assess vessel-level performance. The lumen and external elastic membrane (EEM) boundaries were labeled manually in every IVUS frame at a 0.2-mm interval. Efficient-UNet was utilized for the automatic segmentation of IVUS images. Results: At the frame level, Efficient-UNet showed a high Dice similarity coefficient (DSC, 0.93±0.05) and Jaccard index (JI, 0.87±0.08) for lumen segmentation, and a high DSC (0.97±0.03) and JI (0.94±0.04) for EEM segmentation. At the vessel level, there were close correlations between model-derived and expert-measured IVUS parameters: minimal lumen image area (r=0.92), EEM area (r=0.88), lumen volume (r=0.99), and plaque volume (r=0.95). The agreement between model-derived and expert-measured minimal lumen area was comparable to the agreement between experts. Model-based lumen and EEM segmentation for a 20-mm lesion segment required 13.2 seconds, whereas manual segmentation at a 0.2-mm interval by an expert took 187.5 minutes on average. Conclusions: The deep learning models can accurately and quickly delineate vascular geometry. The artificial-intelligence-based methodology may support clinicians' decision-making through real-time application in the catheterization laboratory.
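The frame-level DSC and Jaccard index reported in this abstract are standard overlap metrics on binary masks. A generic sketch (not the study's code):

```python
def dice_and_jaccard(pred, truth):
    """Dice similarity coefficient and Jaccard index for flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dsc = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    ji = inter / union if union else 1.0
    return dsc, ji
```

The two metrics are monotonically related (DSC = 2J / (1 + J)), which is why a high DSC in the paper is always accompanied by a correspondingly high JI.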

Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning (딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구)

  • Lim, SangHeon;Kim, YoungJae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.468-475 / 2020
  • In this paper, we proposed a deep-learning-based three-dimensional visualization system for medical images in augmented reality. In the proposed system, an artificial neural network model performed fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After applying a three-dimensional volume rendering method to the segmented images, the result was visualized on augmented reality devices. In the experiments, nodules present in the lung region could be easily distinguished with the naked eye, and the location and shape of the lesions were intuitively confirmed. The evaluation compared automated segmentation results on the test dataset to manually segmented images. For the lung region, the segmentation model obtained a DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10%; for the pulmonary nodule region, a DSC of 91.88%, precision of 93.05%, and recall of 90.94%. If the proposed system is applied in medical fields such as medical practice and medical education, it is expected to contribute to custom organ modeling of patients, lesion analysis, and surgical education and training.

Boundary and Reverse Attention Module for Lung Nodule Segmentation in CT Images (CT 영상에서 폐 결절 분할을 위한 경계 및 역 어텐션 기법)

  • Hwang, Gyeongyeon;Ji, Yewon;Yoon, Hakyoung;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.265-272 / 2022
  • As the risk of lung cancer has increased, early-stage detection and treatment of cancers have received a lot of attention. Among various medical imaging approaches, computed tomography (CT) has been widely utilized to examine the size and growth rate of lung nodules. However, manual examination is time-consuming and causes physical and mental fatigue for medical professionals. Recently, many computer-aided diagnostic methods have been proposed to reduce their workload. In recent studies, encoder-decoder architectures have shown reliable performance in medical image segmentation and have been adopted to predict lesion candidates. However, localizing nodules in lung CT images is a challenging problem due to the extremely small sizes and unstructured shapes of nodules. To solve these problems, we utilize atrous spatial pyramid pooling (ASPP) in a general U-Net baseline model to minimize the loss of information and extract rich representations from various receptive fields. Moreover, we propose a mixed attention mechanism combining reverse attention, boundary attention, and the convolutional block attention module (CBAM) to improve segmentation accuracy for small nodules of various shapes. The performance of the proposed model is compared with several previous attention mechanisms on the LIDC-IDRI dataset, and the experimental results demonstrate that reverse, boundary, and CBAM attention (RB-CBAM) are effective in the segmentation of small nodules.
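How ASPP's atrous (dilated) branches yield "various receptive fields" can be illustrated with the standard receptive-field formula for stride-1 convolutions. The dilation rates below are ASPP's common defaults, not necessarily the rates used in this paper.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions, where each layer
    has an effective kernel size k_eff = dilation * (k - 1) + 1."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf
```

A single 3x3 branch at dilation rates 1, 6, 12, and 18 sees 3, 13, 25, and 37 pixels respectively, so the parallel branches capture both small nodules and their wider context at the same parameter cost.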

Algorithm for Extract Region of Interest Using Fast Binary Image Processing (고속 이진화 영상처리를 이용한 관심영역 추출 알고리즘)

  • Cho, Young-bok;Woo, Sung-hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.4 / pp.634-640 / 2018
  • In this paper, we propose an automatic extraction algorithm for regions of interest (ROI) in medical x-ray images. The proposed algorithm uses segmentation, feature extraction, and reference-image matching to detect lesion sites in the input image. The extracted region is matched against lesion images in a reference DB, and the matched results are automatically extracted using Kalman-filter-based fitness feedback. To extract the growth plate, the algorithm extracts the contour of the left hand from the input left-hand x-ray image and creates candidate regions using multi-scale Hessian-matrix-based segmentation. As a result, the proposed algorithm segmented rapidly, taking 0.02 seconds in the ROI segmentation phase and 0.53 seconds to extract the ROI from the segmented image, and the refinement phase performed very accurate image segmentation in 0.49 seconds.
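The fast-binarization ROI step can be sketched as thresholding followed by taking the bounding box of the foreground pixels. These helpers are hypothetical illustrations of the general technique, not the paper's implementation.

```python
def binarize(image, threshold):
    """Threshold a 2D grayscale image (list of lists) to a binary mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def roi_bbox(mask):
    """Bounding box (top, left, bottom, right) of foreground pixels, or None."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows) + 1, max(cols) + 1
```

Binarization is a single pass over the pixels, which is what makes this kind of ROI candidate extraction fast enough for the sub-second timings the abstract reports.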

Deep learning-based apical lesion segmentation from panoramic radiographs

  • Song, Il-Seok;Shin, Hak-Kyun;Kang, Ju-Hee;Kim, Jo-Eun;Huh, Kyung-Hoe;Yi, Won-Jin;Lee, Sam-Sun;Heo, Min-Suk
    • Imaging Science in Dentistry / v.52 no.4 / pp.351-357 / 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in the field of medical and dental research. CNNs can provide an effective diagnostic methodology allowing for the detection of early-stage diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance of identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: In the test group of 180 apical lesions, 147 lesions were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-score values, as a measure of performance, were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
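Counting a predicted lesion as detected when its IoU with a ground-truth lesion exceeds a threshold, as in the evaluation above, can be sketched as follows. This generic sketch uses boxes and greedy matching for brevity; the study itself scores free-form segmented regions.

```python
def iou(a, b):
    """IoU of two boxes given as (top, left, bottom, right)."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, bottom - top) * max(0, right - left)
    area = lambda x: (x[2] - x[0]) * (x[3] - x[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def f1_at_threshold(preds, truths, thr):
    """Greedy one-to-one matching of predictions to ground truths at IoU >= thr."""
    matched = set()
    tp = 0
    for p in preds:
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```

Raising the threshold demands tighter overlap before a lesion counts as found, which is why the reported F1 drops from 0.828 at IoU 0.3 to 0.742 at IoU 0.5.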

Segmentation of MR Brain Image Using Scale Space Filtering and Fuzzy Clustering (스케일 스페이스 필터링과 퍼지 클러스터링을 이용한 뇌 자기공명영상의 분할)

  • 윤옥경;김동휘;박길흠
    • Journal of Korea Multimedia Society / v.3 no.4 / pp.339-346 / 2000
  • Medical images are analyzed to obtain anatomical information for diagnosis. Segmentation must precede recognition to determine lesions more accurately. In this paper, we propose an automatic segmentation algorithm for MR brain images that uses T1-weighted, T2-weighted, and PD images complementarily. The proposed algorithm first extracts cerebrum images from the three input images using a cerebrum mask made from the PD image. Next, it finds 3D clusters corresponding to cerebrum tissues using scale-space filtering and 3D clustering in the 3D space formed by the T1, T2, and PD axes. The cerebrum images are then segmented using the FCM algorithm, with the centroids of the 3D clusters as its initial centroids. The proposed algorithm improves segmentation results by using accurate cluster centroids as initial values for the FCM algorithm, and the multi-spectral analysis yields better segmentation results than single-spectral analysis.
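The FCM step at the core of the segmentation above alternates between updating soft memberships from distances to the centroids and recomputing the centroids as membership-weighted means. A minimal one-dimensional sketch, assuming the common fuzzifier m=2 (the paper's feature space is 3D over the T1/T2/PD axes):

```python
def fcm_memberships(points, centroids, m=2.0):
    """Fuzzy c-means membership of each point to each centroid:
    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)). Exact hits get membership 1."""
    exp = 2.0 / (m - 1.0)
    out = []
    for x in points:
        d = [abs(x - c) for c in centroids]
        if 0.0 in d:
            out.append([1.0 if di == 0.0 else 0.0 for di in d])
            continue
        out.append([1.0 / sum((d[j] / d[k]) ** exp for k in range(len(centroids)))
                    for j in range(len(centroids))])
    return out

def fcm_centroids(points, memberships, m=2.0):
    """Update centroids as membership-weighted means."""
    c = len(memberships[0])
    new = []
    for j in range(c):
        w = [u[j] ** m for u in points and memberships]
        new.append(sum(wi * x for wi, x in zip(w, points)) / sum(w))
    return new
```

Seeding the centroids from the 3D clusters found by scale-space filtering, as the abstract describes, is what keeps this alternating update from converging to a poor local optimum.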
