• Title/Summary/Keyword: DeepU-Net

Attention U-Net Based Palm Line Segmentation for Biometrics (생체인식을 위한 Attention U-Net 기반 손금 추출 기법)

  • Kim, InKi;Kim, Beomjun;Gwak, Jeonghwan
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.89-91 / 2022
  • In this paper, we propose a method for extracting palm lines based on Attention U-Net for palm-line biometrics, one of the means of biometric recognition. Among the lines of the palm, the principal lines, the life line, head line, and heart line, have the characteristic of remaining nearly unchanged. Unlike existing palm-line extraction methods, which extract lines from similar colors or against a restricted background, the proposed method can be applied to backgrounds similar to skin color or to diverse backgrounds, and can therefore be used in biometric methods that recognize users. Through the characteristics of the Attention U-Net used in this paper, we confirmed that the palm-line segmentation region can be learned efficiently while updating the attention coefficients.
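The attention mechanism referred to in this abstract is typically realized as an additive attention gate on the U-Net skip connections. Below is a minimal PyTorch sketch of such a gate, assuming the gating signal has already been upsampled to the skip resolution; channel sizes and names are illustrative and not taken from the paper.

```python
# Minimal sketch of an additive attention gate in the style of Attention U-Net;
# layer sizes and names are illustrative, not the paper's code.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # encoder skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)     # decoder gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)           # attention coefficient map
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, skip, gate):
        # Additive attention: alpha = sigmoid(psi(relu(theta(x) + phi(g))))
        alpha = self.sigmoid(self.psi(self.relu(self.theta(skip) + self.phi(gate))))
        return skip * alpha  # re-weight skip features before concatenation

# Example: gate 64-channel skip features with a 128-channel decoder signal
gate_layer = AttentionGate(skip_ch=64, gate_ch=128, inter_ch=32)
skip = torch.randn(1, 64, 128, 128)
gate = torch.randn(1, 128, 128, 128)    # assumed already upsampled to skip resolution
out = gate_layer(skip, gate)            # (1, 64, 128, 128)
```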

Accuracy analysis of Multi-series Phenological Landcover Classification Using U-Net-based Deep Learning Model - Focusing on the Seoul, Republic of Korea - (U-Net 기반 딥러닝 모델을 이용한 다중시기 계절학적 토지피복 분류 정확도 분석 - 서울지역을 중심으로 -)

  • Kim, Joon;Song, Yongho;Lee, Woo-Kyun
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.409-418 / 2021
  • The land cover map is important data used as a basis for decision-making in land and environmental policy. Land cover maps are produced from remote sensing data, and the classification results may vary depending on the acquisition time of the data, even for the same area. In this study, to overcome the classification accuracy limit of single-period data, multi-series satellite images were used so that a U-Net model, one of the deep learning algorithms, could learn the seasonal differences in the spectral reflectance characteristics of the land surface and thereby improve classification accuracy. The degree of improvement was then assessed by comparison with the accuracy of single-period data. Seoul, which consists of various land covers including 30% green space and the Han River within its area, was set as the study target, and quarterly Sentinel-2 satellite images for 2020 were acquired. The U-Net model was trained using the sub-class land cover map produced by the Korean Ministry of Environment. When the trained U-Net model was used to classify single-period, double-series, triple-series, and quadruple-series inputs, the multi-series cases showed accuracies of 81%, 82%, and 79%, respectively, exceeding the 75% standard for securing land cover classification accuracy, whereas the single-period case did not. This confirmed that classification accuracy can be improved through multi-series classification.
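One straightforward way to feed multi-series imagery to a U-Net, consistent with the setup described above, is to stack the seasonal scenes along the channel axis. The sketch below assumes 10-band quarterly Sentinel-2 patches and a generic first convolution; it is not the authors' code.

```python
# Minimal sketch of stacking quarterly Sentinel-2 scenes along the channel axis
# so a standard U-Net can learn seasonal reflectance differences. Band count,
# patch size, and the first-layer convolution are assumptions.
import torch
import torch.nn as nn

def stack_quarters(quarterly_scenes):
    """quarterly_scenes: list of tensors shaped (bands, H, W), one per season."""
    return torch.cat(quarterly_scenes, dim=0)   # (seasons * bands, H, W)

# Example: 4 quarters of 10-band imagery -> 40 input channels
quarters = [torch.randn(10, 256, 256) for _ in range(4)]
x = stack_quarters(quarters).unsqueeze(0)       # (1, 40, 256, 256)

# Any U-Net whose first convolution accepts 40 channels can consume this input.
first_conv = nn.Conv2d(in_channels=x.shape[1], out_channels=64, kernel_size=3, padding=1)
features = first_conv(x)                        # (1, 64, 256, 256)
```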

Fast and Accurate Single Image Super-Resolution via Enhanced U-Net

  • Chang, Le;Zhang, Fan;Li, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1246-1262 / 2021
  • Recent studies have demonstrated the strong ability of deep convolutional neural networks (CNNs) to significantly boost performance in single image super-resolution (SISR). The key concern is how to efficiently recover and utilize diverse frequency information across multiple network layers, which is crucial to satisfactory super-resolution image reconstruction. Hence, previous work made great efforts to incorporate hierarchical frequencies through various sophisticated architectures. Nevertheless, economical SISR also requires a capable structural design that balances restoration accuracy and computational complexity, which remains a challenge for existing techniques. In this paper, we tackle this problem by proposing a competent architecture called the Enhanced U-Net Network (EUN), which can yield ready-to-use features at miscellaneous frequencies and combine them comprehensively. In particular, the proposed building block for EUN is enhanced from U-Net and can extract abundant information via multiple skip concatenations. The network configuration allows the pipeline to propagate information from lower layers to higher ones. Meanwhile, the block itself is designed to grow quite deep in layers, which allows different types of information to emerge from a single block. Furthermore, owing to its strong advantage in distilling effective information, promising results are obtained with comparatively fewer filters. Comprehensive experiments show that our model achieves favorable performance over state-of-the-art methods, especially in terms of computational efficiency.
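The abstract describes the EUN building block only at a high level (a U-Net-like block with multiple skip concatenations and a deep layer stack). The following PyTorch sketch shows one plausible block of that kind under those assumptions; the actual EUN layer layout and filter counts are not reproduced here.

```python
# Hedged sketch of a small U-Net-style building block with skip concatenations,
# in the spirit of the EUN block described above; not the paper's architecture.
import torch
import torch.nn as nn

def conv(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class UNetLikeBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.enc1 = conv(ch, ch)
        self.enc2 = conv(ch, ch)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv(ch, ch)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec2 = conv(2 * ch, ch)   # concatenation of upsampled features + enc2 skip
        self.dec1 = conv(2 * ch, ch)   # concatenation of decoder output + enc1 skip

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return d1 + x   # residual connection keeps the block easy to stack

block = UNetLikeBlock(ch=32)
y = block(torch.randn(1, 32, 64, 64))   # (1, 32, 64, 64)
```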

Comparison of Multi-Label U-Net and Mask R-CNN for panoramic radiograph segmentation to detect periodontitis

  • Rini, Widyaningrum;Ika, Candradewi;Nur Rahman Ahmad Seno, Aji;Rona, Aulianisa
    • Imaging Science in Dentistry / v.52 no.4 / pp.383-391 / 2022
  • Purpose: Periodontitis, the most prevalent chronic inflammatory condition affecting the tooth-supporting tissues, is diagnosed and classified through clinical and radiographic examinations. The staging of periodontitis using panoramic radiographs provides information for designing computer-assisted diagnostic systems. Image segmentation is required for image processing in diagnostic applications for periodontitis. This study evaluated image segmentation for periodontitis staging based on deep learning approaches. Materials and Methods: Multi-Label U-Net and Mask R-CNN models were compared for image segmentation to detect periodontitis using 100 digital panoramic radiographs. Normal conditions and 4 stages of periodontitis were annotated on these panoramic radiographs. A total of 1100 original and augmented images were then randomly divided into a training (75%) dataset to produce segmentation models and a testing (25%) dataset to determine the evaluation metrics of the segmentation models. Results: The performance of the segmentation models against the radiographic diagnosis of periodontitis conducted by a dentist was described by evaluation metrics (i.e., the Dice coefficient and intersection-over-union [IoU] score). Multi-Label U-Net achieved a Dice coefficient of 0.96 and an IoU score of 0.97. Meanwhile, Mask R-CNN attained a Dice coefficient of 0.87 and an IoU score of 0.74. U-Net showed the characteristics of semantic segmentation, and Mask R-CNN performed instance segmentation with accuracy, precision, recall, and F1-score values of 95%, 85.6%, 88.2%, and 86.6%, respectively. Conclusion: Multi-Label U-Net produced superior image segmentation to that of Mask R-CNN. The authors recommend integrating it with other techniques to develop hybrid models for automatic periodontitis detection.
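For reference, the two reported evaluation metrics can be computed from binary masks as follows; the smoothing constant and the random example inputs are illustrative choices, not part of the study.

```python
# Minimal sketch of the Dice coefficient and IoU score for binary segmentation masks.
import torch

def dice_coefficient(pred, target, eps=1e-7):
    """pred, target: binary tensors of the same shape (0/1 per pixel)."""
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + eps) / (union + eps)

pred = (torch.rand(1, 512, 512) > 0.5).float()
target = (torch.rand(1, 512, 512) > 0.5).float()
print(dice_coefficient(pred, target).item(), iou_score(pred, target).item())
```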

Deep Learning-based Spine Segmentation Technique Using the Center Point of the Spine and Modified U-Net (척추의 중심점과 Modified U-Net을 활용한 딥러닝 기반 척추 자동 분할)

  • Sungjoo Lim;Hwiyoung Kim
    • Journal of Biomedical Engineering Research / v.44 no.2 / pp.139-146 / 2023
  • Osteoporosis is a disease in which the risk of bone fracture increases due to a decrease in bone density caused by aging. Osteoporosis is diagnosed by measuring bone density in the total hip, femoral neck, and lumbar spine. To accurately measure bone density in the lumbar spine, the vertebral region must be segmented from the lumbar X-ray image. Deep learning-based automatic spine segmentation methods can provide fast and precise information about the vertebral region. In this study, we used 695 lumbar spine images as training and test datasets for a deep learning segmentation model. We propose CM-Net, an automatic lumbar segmentation model that combines the center point of the spine with a modified U-Net network. The average Dice Similarity Coefficient (DSC) was 0.974, precision was 0.916, recall was 0.906, accuracy was 0.998, and the Area under the Precision-Recall Curve (AUPRC) was 0.912. This study demonstrates a high-performance automatic segmentation model for lumbar X-ray images that is robust to noise such as spinal fractures and implants. Furthermore, accurate bone density measurement on lumbar X-ray images can be performed using this automatic spine segmentation methodology, which can help prevent the risk of compression fractures at an early stage and improve the accuracy and efficiency of osteoporosis diagnosis.
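The abstract does not specify how CM-Net injects the spine center point into the modified U-Net. One common pattern, sketched below purely as a hypothetical illustration, is to encode the point as a Gaussian heatmap and concatenate it with the X-ray as an extra input channel.

```python
# Hypothetical illustration of feeding a spine center point to a segmentation
# network as a Gaussian heatmap channel; CM-Net's actual mechanism is not
# described in the abstract, so this is only one plausible design.
import torch

def center_heatmap(h, w, cy, cx, sigma=8.0):
    ys = torch.arange(h).float().unsqueeze(1)   # (h, 1)
    xs = torch.arange(w).float().unsqueeze(0)   # (1, w)
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

xray = torch.randn(1, 1, 256, 256)                              # grayscale lumbar X-ray
heat = center_heatmap(256, 256, cy=140, cx=128).view(1, 1, 256, 256)
net_input = torch.cat([xray, heat], dim=1)                      # (1, 2, 256, 256)
```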

Land Cover Classification of Satellite Image using SSResUnet Model (SSResUnet 모델을 이용한 위성 영상 토지피복분류)

  • Joohyung Kang;Minsung Kim;Seongjin Kim;Sooyeong Kwak
    • Journal of IKEEE / v.27 no.4 / pp.456-463 / 2023
  • In this paper, we introduce the SSResUNet network model, which integrates the SPADE structure with the U-Net network model for accurate land cover classification using high-resolution satellite imagery without requiring user intervention. The proposed network possesses the advantage of preserving the spatial characteristics inherent in satellite imagery, rendering it a robust classification model even in intricate environments. Experimental results, obtained through training on KOMPSAT-3A satellite images, exhibit superior performance compared to conventional U-Net and U-Net++ models, showcasing an average Intersection over Union (IoU) of 76.10 and a Dice coefficient of 86.22.
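SPADE (spatially-adaptive denormalization) replaces the learned affine parameters of a normalization layer with per-pixel scales and shifts predicted from a guidance map. The sketch below shows a generic SPADE layer in the style of Park et al.; how SSResUNet wires it into U-Net and what it uses as the modulation input are assumptions, since the abstract does not say.

```python
# Sketch of a generic SPADE (spatially-adaptive denormalization) layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, feat_ch, guide_ch, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_ch, affine=False)       # parameter-free normalization
        self.shared = nn.Sequential(nn.Conv2d(guide_ch, hidden, 3, padding=1),
                                    nn.ReLU(inplace=True))
        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)    # per-pixel scale
        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)     # per-pixel shift

    def forward(self, feat, guide):
        guide = F.interpolate(guide, size=feat.shape[2:], mode="nearest")
        h = self.shared(guide)
        return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

spade = SPADE(feat_ch=128, guide_ch=4)      # e.g. a 4-band satellite patch as guide (assumption)
feat = torch.randn(2, 128, 64, 64)
guide = torch.randn(2, 4, 256, 256)
out = spade(feat, guide)                    # (2, 128, 64, 64)
```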

Image Segmentation for Fire Prediction using Deep Learning (딥러닝을 이용한 화재 발생 예측 이미지 분할)

  • TaeHoon, Kim;JongJin, Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.1 / pp.65-70 / 2023
  • In this paper, we used a deep learning model to detect and segment flame and smoke from fires in real time. To this end, the well-known U-Net was used to separate and segment the flame and smoke of a fire with multi-class segmentation. As a result of training with the proposed technique, the loss error and accuracy values were very good at 0.0486 and 0.97996, respectively. The IoU value used in object detection was also very good at 0.849. When the trained model was used to predict fire images that had not been used for training, the flame and smoke of the fire were well detected and segmented, and the smoke color was well distinguished. The proposed method can be used to build a fire prediction and detection system.
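Multi-class flame/smoke segmentation of the kind described above is usually trained with a per-pixel cross-entropy loss over class maps. The sketch below assumes three classes (background, flame, smoke) and random stand-in tensors; it is not the authors' training code.

```python
# Minimal sketch of multi-class (background / flame / smoke) pixel-wise training
# with cross-entropy; class indices and tensor shapes are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 3                        # 0: background, 1: flame, 2: smoke (assumed)
criterion = nn.CrossEntropyLoss()

logits = torch.randn(4, NUM_CLASSES, 256, 256, requires_grad=True)   # model output
labels = torch.randint(0, NUM_CLASSES, (4, 256, 256))                # per-pixel class ids

loss = criterion(logits, labels)       # averaged over all pixels in the batch
loss.backward()
pred = logits.argmax(dim=1)            # (4, 256, 256) predicted class map
```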

A study on DEMONgram frequency line extraction method using deep learning (딥러닝을 이용한 DEMON 그램 주파수선 추출 기법 연구)

  • Wonsik Shin;Hyuckjong Kwon;Hoseok Sul;Won Shin;Hyunsuk Ko;Taek-Lyul Song;Da-Sol Kim;Kang-Hoon Choi;Jee Woong Choi
    • The Journal of the Acoustical Society of Korea / v.43 no.1 / pp.78-88 / 2024
  • Ship-radiated noise received by passive sonar, which measures underwater noise, can be used to identify and classify ships through Detection of Envelope Modulation on Noise (DEMON) analysis. However, in a low Signal-to-Noise Ratio (SNR) environment, it is difficult to analyze and identify the target frequency lines containing ship information in the DEMONgram. In this paper, we conducted a study to extract target frequency lines using semantic segmentation, one of the deep learning techniques, for more accurate target identification in a low SNR environment. The semantic segmentation models U-Net, UNet++, and DeepLabv3+ were trained and evaluated using simulated DEMONgram data generated by varying the SNR and fundamental frequency, and their DEMONgram prediction performance was compared on DeepShip, a dataset of ship-radiated noise recordings from the Strait of Georgia in Canada. Evaluation of the trained models on the simulated DEMONgrams confirmed that U-Net had the highest performance and that it was possible to extract the target frequency lines of DEMONgrams made from DeepShip to some extent.
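A simulated DEMONgram of the kind mentioned above can be approximated as a set of harmonic frequency lines embedded in noise at a chosen SNR. The toy generator below illustrates that idea only; the resolution, line model, and scaling are assumptions, not the authors' simulator.

```python
# Toy sketch of a DEMONgram-like image: harmonic lines of a chosen fundamental
# embedded in noise at a chosen SNR, plus the line mask as a segmentation target.
import numpy as np

def toy_demongram(n_time=128, n_freq=256, f0_bin=20, n_harmonics=5, snr_db=0.0):
    signal = np.zeros((n_time, n_freq))
    for k in range(1, n_harmonics + 1):
        b = k * f0_bin
        if b < n_freq:
            signal[:, b] = 1.0                       # constant harmonic line
    noise = np.abs(np.random.randn(n_time, n_freq))
    # scale noise so the image has roughly the requested SNR
    sig_pow = np.mean(signal ** 2)
    noise_pow = np.mean(noise ** 2)
    noise *= np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    gram = signal + noise
    mask = (signal > 0).astype(np.float32)           # target frequency lines
    return gram, mask

gram, mask = toy_demongram(snr_db=-5.0)
```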

A Computer Aided Diagnosis Algorithm for Classification of Malignant Melanoma based on Deep Learning (딥 러닝 기반의 악성흑색종 분류를 위한 컴퓨터 보조진단 알고리즘)

  • Lim, Sangheon;Lee, Myungsuk
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.4 / pp.69-77 / 2018
  • Malignant melanoma accounts for about 1 to 3% of all malignant tumors in the West, and in the US in particular it is a disease that causes more than 9,000 deaths each year. In general, it is difficult to detect the features of skin lesions through photography. In this paper, we propose a computer-aided diagnosis algorithm based on deep learning for classifying malignant melanoma and benign skin tumors in RGB-channel skin images. The proposed deep learning pipeline consists of a tumor lesion segmentation model and a malignant melanoma classification model. First, U-Net was used to segment the skin lesion area in the dermoscopic image. We then implemented an algorithm that classifies malignant melanoma and benign tumors with ResNet, using the segmented skin lesion image and the experts' labels. The U-Net model obtained a Dice similarity coefficient of 83.45% compared with the experts' labels, and the classification accuracy for malignant melanoma was 83.06%. As a result, it is expected that the proposed artificial intelligence algorithm can be utilized as a computer-aided diagnosis tool and help to detect malignant melanoma at an early stage.
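The two-stage pipeline described above (segment the lesion, then classify it) can be sketched as follows. The 1x1 convolution stands in for a trained U-Net, and the preprocessing and model choices are assumptions rather than the paper's implementation.

```python
# Hedged sketch of a segmentation-then-classification pipeline:
# a segmenter produces a lesion mask, the mask isolates the lesion region,
# and a ResNet classifies melanoma vs. benign.
import torch
import torch.nn as nn
from torchvision import models

segmenter = nn.Conv2d(3, 1, kernel_size=1)        # stand-in for a trained U-Net
classifier = models.resnet18(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 2)   # melanoma vs. benign

image = torch.randn(1, 3, 224, 224)               # dermoscopic RGB image
with torch.no_grad():
    mask = torch.sigmoid(segmenter(image)) > 0.5  # (1, 1, 224, 224) lesion mask
    lesion_only = image * mask                    # zero out non-lesion pixels
    logits = classifier(lesion_only)              # (1, 2)
    prediction = logits.argmax(dim=1)
```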

Evaluation of the Feasibility of Deep Learning for Vegetation Monitoring (딥러닝 기반의 식생 모니터링 가능성 평가)

  • Kim, Dong-woo;Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology / v.26 no.6 / pp.85-96 / 2023
  • This study proposes a method for forest vegetation monitoring using high-resolution aerial imagery captured by unmanned aerial vehicles (UAV) and deep learning technology. The research site was selected in the forested area of Mountain Dogo, Asan City, Chungcheongnam-do, and the target species for monitoring included Pinus densiflora, Quercus mongolica, and Quercus acutissima. To classify vegetation species at the pixel level in UAV imagery based on characteristics such as leaf shape, size, and color, the study employed semantic segmentation using the well-established U-Net deep learning model. The results indicated that it was possible to visually distinguish Pinus densiflora Siebold & Zucc., Quercus mongolica Fisch. ex Ledeb., and Quercus acutissima Carruth. in 135 aerial images captured by UAV. Of these, 104 images were used as training data for the deep learning model, while 31 images were used for inference. Optimization of the deep learning model resulted in an overall average pixel accuracy of 92.60, with an mIoU of 0.80 and an FIoU of 0.82, demonstrating the successful construction of a reliable deep learning model. This study is significant as a pilot case for the application of UAV and deep learning to monitor and manage representative species among climate-vulnerable vegetation, including Pinus densiflora, Quercus mongolica, and Quercus acutissima. It is expected that in the future, UAV and deep learning models can be applied to a variety of vegetation species to better support forest management.
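The mIoU reported above is conventionally computed from a multi-class confusion matrix over all pixels. A minimal sketch follows; the class count and random inputs are illustrative only.

```python
# Minimal sketch of mean IoU (mIoU) from a multi-class confusion matrix,
# as commonly reported for semantic segmentation.
import numpy as np

def mean_iou(pred, target, num_classes):
    """pred, target: integer class maps of identical shape."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(target.ravel(), pred.ravel()):
        conf[t, p] += 1
    ious = []
    for c in range(num_classes):
        tp = conf[c, c]
        union = conf[c, :].sum() + conf[:, c].sum() - tp
        if union > 0:
            ious.append(tp / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, (256, 256))     # e.g. 3 species + background (assumed)
target = np.random.randint(0, 4, (256, 256))
print(mean_iou(pred, target, num_classes=4))
```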