• Title/Summary/Keyword: UNet++


Ensemble UNet 3+ for Medical Image Segmentation

  • JongJin, Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.269-274
    • /
    • 2023
  • In this paper, we propose a new UNet 3+ model for medical image segmentation. The proposed ensemble (E) UNet 3+ model combines UNet 3+ networks of varying depths into one unified architecture. These networks share the same encoder but have their own decoders, which bridges the semantic gap between the encoder and decoder nodes of UNet 3+. Deep supervision was applied to a total of 8 nodes of the E-UNet 3+ to improve performance. The proposed E-UNet 3+ model produces better segmentation results than UNet 3+. In the simulation, the E-UNet 3+ model with deep supervision performed best on the training and validation data, with loss-function values of 0.8904 and 0.8562, respectively, while on the test data the UNet 3+ model with deep supervision performed best with a value of 0.7406. Qualitative comparison of the simulation results also shows that the proposed model outperforms the existing UNet 3+.
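
As a reading aid for the abstract above, the core idea of a shared encoder feeding decoders of several depths, each with its own deep-supervision head, can be sketched in PyTorch. This is only an illustrative sketch: the layer widths are assumptions and UNet 3+'s full-scale skip connections are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True))

class EnsembleUNetSketch(nn.Module):
    """One shared encoder, several decoders of different depths, each decoder
    ending in a deep-supervision head (illustrative assumption, not the paper's code)."""
    def __init__(self, in_ch=1, n_classes=1, widths=(32, 64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList()
        c = in_ch
        for w in widths:                                   # shared encoder stages
            self.enc.append(conv_block(c, w))
            c = w
        # decoder k climbs from encoder stage k back up to stage 0
        self.dec = nn.ModuleList(
            nn.ModuleList(conv_block(widths[i] + widths[i - 1], widths[i - 1])
                          for i in range(k, 0, -1))
            for k in range(1, len(widths)))
        self.heads = nn.ModuleList(nn.Conv2d(widths[0], n_classes, 1) for _ in self.dec)

    def forward(self, x):
        feats, h = [], x
        for i, stage in enumerate(self.enc):
            h = stage(h)
            feats.append(h)
            if i < len(self.enc) - 1:
                h = F.max_pool2d(h, 2)
        outs = []
        for k, decoder in enumerate(self.dec, start=1):
            d = feats[k]
            for j, block in enumerate(decoder):
                skip = feats[k - 1 - j]
                d = F.interpolate(d, size=skip.shape[-2:], mode="bilinear", align_corners=False)
                d = block(torch.cat([d, skip], dim=1))
            outs.append(self.heads[k - 1](d))
        return outs        # one supervised output per decoder depth

# training would sum a segmentation loss over all deep-supervised outputs
```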

Performance Improvement Analysis of Building Extraction Deep Learning Model Based on UNet Using Transfer Learning at Different Learning Rates (전이학습을 이용한 UNet 기반 건물 추출 딥러닝 모델의 학습률에 따른 성능 향상 분석)

  • Chul-Soo Ye;Young-Man Ahn;Tae-Woong Baek;Kyung-Tae Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1111-1123
    • /
    • 2023
  • In recent years, semantic image segmentation methods based on deep learning models have been widely used to monitor changes in surface attributes with remote sensing imagery. To enhance the performance of various UNet-based deep learning models, including the prominent UNet model itself, a sufficiently large training dataset is required. However, enlarging the training dataset not only escalates the hardware requirements for processing but also significantly increases training time. Transfer learning addresses these issues as an effective approach that improves model performance even in the absence of massive training datasets. In this paper, we present three transfer learning models, UNet-ResNet50, UNet-VGG19, and CBAM-DRUNet-VGG19, which incorporate the representative pretrained VGG19 and ResNet50 models. We applied these models to building extraction tasks and analyzed the accuracy improvements resulting from transfer learning. Considering the substantial impact of the learning rate on the performance of deep learning models, we also analyzed the performance variation of each model under different learning rate settings. We employed three datasets, the Kompsat-3A, WHU, and INRIA datasets, to evaluate building extraction performance. Averaged over the three datasets, the accuracy improvement over the UNet model was 5.1% for UNet-ResNet50, while both UNet-VGG19 and CBAM-DRUNet-VGG19 achieved a 7.2% improvement.
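
One common way to realize the transfer-learning setup described above is to attach a UNet decoder to an ImageNet-pretrained encoder and then tune the learning rate. The sketch below uses the third-party segmentation_models_pytorch package and a placeholder learning rate; both are assumptions for illustration, not the authors' implementation.

```python
import torch
import segmentation_models_pytorch as smp   # third-party package, assumed available

# UNet whose encoder is an ImageNet-pretrained ResNet50; a VGG19 encoder can be swapped in
model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                 in_channels=3, classes=1)

# the learning rate is the hyperparameter varied in the paper; 1e-4 is only a placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```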

Breast Tumor Cell Nuclei Segmentation in Histopathology Images using EfficientUnet++ and Multi-organ Transfer Learning

  • Dinh, Tuan Le;Kwon, Seong-Geun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.8
    • /
    • pp.1000-1011
    • /
    • 2021
  • In recent years, the application of deep learning methods to medical and biomedical image analysis has seen many advancements. In clinical practice, deep learning-based cancer image analysis is one of the key applications for cancer detection and treatment. However, the scarcity of labeled images makes it difficult for cancer detection and analysis to reach high accuracy. The Unet model, introduced in 2015, gained much attention from researchers in the field; its success lies in its ability to produce high accuracy with very few training images, and many variants and modifications of the Unet architecture have since been developed. This paper proposes a new approach that uses Unet++ with a pretrained EfficientNet backbone for breast tumor cell nuclei segmentation, together with a multi-organ transfer learning strategy. We evaluate the network on the MonuSeg training dataset and the Triple Negative Breast Cancer (TNBC) testing dataset, both of which consist of Hematoxylin and Eosin (H&E)-stained images. The results show that the EfficientUnet++ architecture and the multi-organ transfer learning approach outperformed other techniques and produced notable accuracy for breast tumor cell nuclei segmentation.
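
The workflow described above, a UNet++ with an EfficientNet encoder trained on multi-organ nuclei images and then reused on breast tumor images, can be sketched as follows. The segmentation_models_pytorch package, the EfficientNet variant, and the file name are assumptions for illustration only.

```python
import torch
import segmentation_models_pytorch as smp   # third-party package, assumed available

# UNet++ with an ImageNet-pretrained EfficientNet encoder; the exact EfficientNet
# variant used in the paper is not stated here, so efficientnet-b0 is a placeholder
model = smp.UnetPlusPlus(encoder_name="efficientnet-b0", encoder_weights="imagenet",
                         in_channels=3, classes=1)

# multi-organ transfer learning, schematically:
#   1) train the model on the multi-organ MonuSeg H&E images
#   2) reuse the trained weights when evaluating on the TNBC breast tumor images
torch.save(model.state_dict(), "monuseg_pretrained.pt")        # after step 1
model.load_state_dict(torch.load("monuseg_pretrained.pt"))     # before step 2
```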

Development and Evaluation of D-Attention Unet Model Using 3D and Continuous Visual Context for Needle Detection in Continuous Ultrasound Images (연속 초음파영상에서의 바늘 검출을 위한 3D와 연속 영상문맥을 활용한 D-Attention Unet 모델 개발 및 평가)

  • Lee, So Hee;Kim, Jong Un;Lee, Su Yeol;Ryu, Jeong Won;Choi, Dong Hyuk;Tae, Ki Sik
    • Journal of Biomedical Engineering Research
    • /
    • v.41 no.5
    • /
    • pp.195-202
    • /
    • 2020
  • Needle detection in ultrasound images is sometimes difficult due to obstruction by fat tissue. Accurate needle detection using continuous ultrasound (CUS) images is a vital stage of treatment planning for tissue biopsy and brachytherapy. The study has two main goals. First, a new detection model, the D-Attention Unet, is developed by combining the context information of 3D medical data with CUS images. Second, the D-Attention Unet model is compared with other models to verify its usefulness for needle detection in continuous ultrasound images. To evaluate the performance of the D-Attention Unet, continuous needle images acquired with ultrasound were converted into still images to form a dataset, which was used for training and testing. Based on the results, the proposed D-Attention Unet model showed better performance than the other three models (Unet, D-Unet and Attention Unet), with a Dice Similarity Coefficient (DSC), recall and precision of 71.9%, 70.6% and 73.7%, respectively. In conclusion, the D-Attention Unet model provides accurate needle detection for US-guided biopsy or brachytherapy, facilitating the clinical workflow. In particular, research on combining image processing techniques with learning techniques is being pursued actively, and applying the proposed method in this manner is expected to make it even more effective.
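
One simple way to feed "continuous visual context" from an ultrasound clip into a 2D segmentation network is to stack neighbouring frames as input channels. The helper below is a hypothetical preprocessing sketch under that assumption, not the authors' pipeline.

```python
import torch

def stack_context(frames, k=1):
    """Build per-frame network inputs whose channels hold the k previous, current,
    and k following ultrasound frames (hypothetical preprocessing sketch)."""
    T = frames.shape[0]
    inputs = []
    for t in range(T):
        idx = [min(max(t + d, 0), T - 1) for d in range(-k, k + 1)]  # clamp at clip edges
        inputs.append(frames[idx])           # (2k+1, H, W): temporal context as channels
    return torch.stack(inputs)               # (T, 2k+1, H, W), ready for a 2D UNet variant

clip = torch.rand(10, 256, 256)              # dummy clip of 10 grayscale frames
batch = stack_context(clip, k=1)
print(batch.shape)                           # torch.Size([10, 3, 256, 256])
```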

Atrous Residual U-Net for Semantic Segmentation in Street Scenes based on Deep Learning (딥러닝 기반 거리 영상의 Semantic Segmentation을 위한 Atrous Residual U-Net)

  • Shin, SeokYong;Lee, SangHun;Han, HyunHo
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.10
    • /
    • pp.45-52
    • /
    • 2021
  • In this paper, we propose an Atrous Residual U-Net (AR-UNet) to improve the segmentation accuracy of U-Net-based semantic segmentation. The U-Net is mainly used in fields such as medical image analysis, autonomous vehicles, and remote sensing. The conventional U-Net extracts insufficient features because of the small number of convolution layers in its encoder. The extracted features are essential for classifying object categories, and when they are insufficient, segmentation accuracy decreases. To address this problem, we propose the AR-UNet, which applies residual learning and ASPP in the encoder. Residual learning improves feature extraction ability and is effective in preventing the feature loss and vanishing gradient problems caused by stacked convolutions. In addition, ASPP enables additional feature extraction without reducing the resolution of the feature map. Experiments on the Cityscapes dataset verified the effectiveness of the AR-UNet, and the results showed improved segmentation compared to the conventional U-Net. In this way, AR-UNet can contribute to the advancement of many applications where accuracy is important.
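
The two encoder additions named in this abstract, residual learning and ASPP, have standard forms that can be sketched as follows; the channel counts and dilation rates are illustrative assumptions rather than the AR-UNet's actual configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual unit: two 3x3 convolutions with an identity shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))    # shortcut counters feature loss / vanishing gradients

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions fused by a 1x1 conv.
    Dilation rates here are placeholders; the paper's rates are not given in the abstract."""
    def __init__(self, c_in, c_out, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(c_in, c_out, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(c_out * len(rates), c_out, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.rand(1, 64, 64, 64)
y = ASPP(64, 64)(ResidualBlock(64)(x))       # same spatial size: no resolution is lost
```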

Semantic Building Segmentation Using the Combination of Improved DeepResUNet and Convolutional Block Attention Module (개선된 DeepResUNet과 컨볼루션 블록 어텐션 모듈의 결합을 이용한 의미론적 건물 분할)

  • Ye, Chul-Soo;Ahn, Young-Man;Baek, Tae-Woong;Kim, Kyung-Tae
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1091-1100
    • /
    • 2022
  • As deep learning technology advances and various high-resolution remote sensing images become available, interest in using deep learning and remote sensing big data to detect buildings and changes in urban areas is increasing significantly. In this paper, for semantic building segmentation of high-resolution remote sensing images, we propose a new building segmentation model, Convolutional Block Attention Module (CBAM)-DRUNet, which uses the DeepResUNet model, known for its excellent building segmentation performance, as the basic structure, improves its residual learning unit, and combines a CBAM with this structure. In the performance evaluation using the WHU and INRIA datasets, the proposed building segmentation model showed excellent performance in terms of F1 score, accuracy and recall compared to UNet, ResUNet and DeepResUNet.
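
CBAM itself has a well-known standard form, sequential channel attention followed by spatial attention; the sketch below follows that common formulation, while the reduction ratio and where the module is inserted in CBAM-DRUNet are assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention
    (standard formulation; its placement inside CBAM-DRUNet is not specified here)."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
                                 nn.Linear(ch // reduction, ch))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                  # pooled channel descriptors
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)    # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))           # spatial attention

feat = torch.rand(2, 64, 32, 32)
print(CBAM(64)(feat).shape)                                 # torch.Size([2, 64, 32, 32])
```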

Analysis of Change Detection Results by UNet++ Models According to the Characteristics of Loss Function (손실함수의 특성에 따른 UNet++ 모델에 의한 변화탐지 결과 분석)

  • Jeong, Mila;Choi, Hoseong;Choi, Jaewan
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_2
    • /
    • pp.929-937
    • /
    • 2020
  • In this manuscript, the UNet++ model, one of the representative deep learning techniques for semantic segmentation, was used to detect changes in multi-temporal satellite images. To analyze the learning results under various loss functions, we evaluated change detection results from UNet++ models trained with binary cross entropy and with the Jaccard coefficient. In addition, the results of the deep learning model were compared with existing pixel-based change detection algorithms using WorldView-3 images. The experiment confirmed that the performance of the deep learning model depends on the characteristics of the loss function, and that it produced better results than the existing techniques.
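
The two loss functions compared above can be written compactly for a binary change map; the soft Jaccard formulation below is one common variant and may differ in detail from the paper's.

```python
import torch

def soft_jaccard_loss(logits, target, eps=1e-6):
    """Differentiable Jaccard (IoU) loss for a binary change map (common formulation)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    union = prob.sum() + target.sum() - inter
    return 1.0 - (inter + eps) / (union + eps)

logits = torch.randn(2, 1, 64, 64)                         # raw network outputs
target = torch.randint(0, 2, (2, 1, 64, 64)).float()       # dummy change labels
bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
jac = soft_jaccard_loss(logits, target)
print(float(bce), float(jac))
```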

Waterbody Detection Using UNet-based Sentinel-1 SAR Image: For the Seom-jin River Basin (UNet기반 Sentinel-1 SAR영상을 이용한 수체탐지: 섬진강유역 대상으로)

  • Lee, Doi;Park, Soryeon;Seo, Dongju;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_3
    • /
    • pp.901-912
    • /
    • 2022
  • The frequency of disasters is increasing due to global climate change, and unusually heavy rains and rainy seasons are occurring in Korea. Because these weather conditions can lead to drought and flooding and cause secondary damage, periodic monitoring and rapid detection are important. Research using optical images to delineate waterbodies is ongoing, but optical imagery is limited by cloud cover when detecting the floods that accompany heavy rain. Therefore, research using synthetic aperture radar (SAR), which can observe in all weather regardless of day or night, is needed. In this study, the UNet model, a deep learning algorithm recently applied in various fields, was applied to Sentinel-1 SAR images, which are available as open data in near-real time. Waterbody detection studies using SAR images and deep learning algorithms have been reported, but only a small number have been conducted in Korea. To assess the applicability of deep learning to SAR images, UNet was compared with an existing thresholding method, and the results were evaluated with five indices and the Sentinel-2 normalized difference water index (NDWI). Evaluating accuracy with intersection over union (IoU) confirmed that UNet achieved higher accuracy, with an IoU of 0.894 for UNet and 0.699 for the thresholding method. This study confirms the applicability of deep learning to SAR images, and if high-resolution SAR images and deep learning algorithms are applied, periodic and accurate waterbody change detection in Korea is expected to become possible.
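
The IoU metric used for this comparison, and the kind of backscatter thresholding baseline the UNet is compared against, can be sketched as follows; the threshold value is a placeholder, not the one used in the study.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union between two binary masks (water = 1, non-water = 0)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def threshold_waterbody(sar_db, threshold=-18.0):
    """Baseline thresholding: low SAR backscatter (in dB) is labeled as water.
    The threshold value is a placeholder, not the one used in the study."""
    return (sar_db < threshold).astype(np.uint8)

sar_db = np.random.uniform(-25, 0, size=(256, 256))     # dummy backscatter image
truth = (sar_db < -18.0).astype(np.uint8)               # dummy ground-truth water mask
print(iou(threshold_waterbody(sar_db), truth))          # 1.0 on this toy example
```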

Improving Accuracy of Instance Segmentation of Teeth

  • Jongjin Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.280-286
    • /
    • 2024
  • In this paper, a layered UNet with warmup and dropout tricks was used to perform instance segmentation of teeth from data labeled for each individual tooth and to improve the performance of the result. The previously proposed layered UNet showed very good performance in tooth segmentation without distinguishing tooth numbers. For instance segmentation of teeth, we labeled tooth CBCT data according to the tooth numbering system devised by the FDI World Dental Federation notation. The colors for the labeled teeth follow the AI-Hub teeth dataset. Simulation results show that the layered UNet also segments each tooth very well while distinguishing tooth numbers by color. The layered UNet model using the warmup trick was the best, with IoU values of 0.80 and 0.77 for the training and validation data. To further improve the performance of tooth instance segmentation, more labeled data will be needed. The results of this paper can be used to develop medical software that requires tooth recognition, such as for orthodontic treatment, wisdom tooth extraction, and implant surgery.
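
The "warmup trick" mentioned here usually means ramping the learning rate up over the first epochs of training; a minimal sketch follows, with the warmup length, base learning rate, and dropout placement as assumptions rather than the paper's settings.

```python
import torch

model = torch.nn.Conv2d(1, 1, 3, padding=1)      # stand-in for the layered UNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dropout = torch.nn.Dropout2d(p=0.5)              # the dropout trick; its placement is assumed

# linear learning-rate warmup, one common form of the warmup trick;
# warmup length and base rate are placeholders
warmup_epochs = 5
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda e: min(1.0, (e + 1) / warmup_epochs))

for epoch in range(10):
    # ... one training pass over the labeled CBCT slices, with optimizer.step(), goes here ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())        # ramps up to 1e-3, then stays constant
```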

Enhanced Lung Cancer Segmentation with Deep Supervision and Hybrid Lesion Focal Loss in Chest CT Images (흉부 CT 영상에서 심층 감독 및 하이브리드 병변 초점 손실 함수를 활용한 폐암 분할 개선)

  • Min Jin Lee;Yoon-Seon Oh;Helen Hong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.1
    • /
    • pp.11-17
    • /
    • 2024
  • Lung cancer segmentation in chest CT images is challenging due to the varying sizes of tumors and the presence of surrounding structures with similar intensity values. To address these issues, we propose a lung cancer segmentation network that incorporates deep supervision and utilizes UNet3+ as the backbone. Additionally, we propose a hybrid lesion focal loss function comprising three components: pixel-based, region-based, and shape-based, which allows us to focus on the smaller tumor regions relative to the background and consider shape information for handling ambiguous boundaries. We validate our proposed method through comparative experiments with UNet and UNet3+ and demonstrate that our proposed method achieves superior performance in terms of Dice Similarity Coefficient (DSC) for tumors of all sizes.
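
A loss with pixel-based, region-based, and shape-based components, as described in this abstract, can be sketched as a weighted sum of a focal term, a Dice term, and a boundary term. The sketch below only mirrors that decomposition; the exact terms and weights of the proposed hybrid lesion focal loss are not reproduced, and with deep supervision the loss would be summed over the auxiliary UNet3+ outputs as well.

```python
import torch
import torch.nn.functional as F

def hybrid_lesion_loss_sketch(logits, target, w=(1.0, 1.0, 1.0), gamma=2.0, eps=1e-6):
    """Illustrative three-part loss: pixel term (focal BCE), region term (soft Dice),
    shape term (boundary disagreement). Not the paper's exact formulation."""
    prob = torch.sigmoid(logits)
    # pixel-based: focal binary cross entropy, emphasizing hard pixels
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    pt = torch.exp(-bce)
    focal = ((1 - pt) ** gamma * bce).mean()
    # region-based: soft Dice, emphasizing small tumor regions relative to the background
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    # shape-based: compare boundaries extracted by a morphological gradient (max-pool trick)
    def boundary(m):
        return F.max_pool2d(m, 3, stride=1, padding=1) - m
    shape = F.l1_loss(boundary(prob), boundary(target))
    return w[0] * focal + w[1] * dice + w[2] * shape

logits = torch.randn(1, 1, 64, 64)
target = (torch.rand(1, 1, 64, 64) > 0.5).float()
print(float(hybrid_lesion_loss_sketch(logits, target)))
```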