• Title/Summary/Keyword: Swin model


Comparison of Performance of Medical Image Semantic Segmentation Model in ATLAS V2.0 Data (ATLAS V2.0 데이터에서 의료영상 분할 모델 성능 비교)

  • So Yeon Woo; Yeong Hyeon Gu; Seong Joon Yoo
    • Journal of Broadcast Engineering / v.28 no.3 / pp.267-274 / 2023
  • Public medical image datasets are often too small because collection is restricted, so existing studies may be overfitted to a particular public dataset. In this paper, we compare the performance of eight medical image semantic segmentation models (Unet, X-Net, HarDNet, SegNet, PSPNet, SwinUnet, 3D-ResU-Net, UNETR) to revalidate the superiority of existing models. We compare each model's performance on Anatomical Tracings of Lesions After Stroke (ATLAS) V1.2, a public dataset for stroke diagnosis, against its performance on ATLAS V2.0. Experimental results show that most models perform similarly on V1.2 and V2.0, but X-Net and 3D-ResU-Net perform better on the V1.2 dataset. These results suggest that those models may be overfitted to V1.2.
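The abstract does not name its evaluation metric, but stroke-lesion segmentation on ATLAS is conventionally scored with the Dice coefficient. The sketch below is a minimal NumPy illustration on toy masks, not the paper's data or code:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary lesion masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the prediction covers 2 of the 3 target pixels.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[1, 1:3] = 1    # 2 predicted lesion pixels
target[1, 1:4] = 1  # 3 ground-truth lesion pixels
print(round(dice_score(pred, target), 3))  # 2*2 / (2+3) = 0.8
```

A systematic drop in Dice when moving a model from V1.2 to V2.0, while other models hold steady, is the kind of signal the paper reads as overfitting to V1.2.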

Development of segmentation-based electric scooter parking/non-parking zone classification technology (Segmentation 기반 전동킥보드 주차/비주차 구역 분류 기술의 개발)

  • Yong-Hyeon Jo; Jin Young Choi
    • Convergence Security Journal / v.23 no.5 / pp.125-133 / 2023
  • This paper proposes an AI model that determines parking and non-parking zones from return-authentication photos, to address parking problems that arise in shared electric scooter services. We took a SegFormer-B0 model pre-trained on ADE20K and fine-tuned it on tactile paving blocks and electric scooters to extract segmentation maps of objects relevant to parking and non-parking areas, and we present a method that performs binary classification of parking and non-parking zones with a Swin model on these maps. After labeling a total of 1,689 images and fine-tuning the SegFormer model, it achieved an mAP of 81.26% in recognizing electric scooters and tactile blocks. The classification model, trained on a total of 2,817 images, achieved an accuracy of 92.11% and an F1-score of 91.50% in classifying parking and non-parking areas.
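As an illustration of how a segmentation map can drive the parking/non-parking decision, the sketch below replaces the paper's learned Swin classifier with a simple proximity rule: if the detected scooter lies close to tactile paving blocks, the photo is flagged as non-parking. The class ids and the distance margin are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical class ids in the fine-tuned segmentation map
# (the paper's actual label set is not given in the abstract).
SCOOTER, TACTILE_BLOCK = 1, 2

def is_non_parking(seg_map: np.ndarray, margin: int = 5) -> bool:
    """Rule-based stand-in for the Swin classifier.

    Flags non-parking when any scooter pixel is within `margin` pixels
    (Chebyshev distance) of a tactile-block pixel.
    """
    scooter = np.argwhere(seg_map == SCOOTER)
    blocks = np.argwhere(seg_map == TACTILE_BLOCK)
    if scooter.size == 0 or blocks.size == 0:
        return False  # nothing to flag without both object types
    # Pairwise Chebyshev distances between scooter and block pixels.
    d = np.abs(scooter[:, None, :] - blocks[None, :, :]).max(axis=-1).min()
    return bool(d <= margin)

# Scooter parked right next to tactile blocks -> non-parking.
near = np.zeros((20, 20), dtype=int)
near[5:8, 5:8] = SCOOTER
near[9:12, 5:8] = TACTILE_BLOCK
print(is_non_parking(near))  # True

# Scooter far from the blocks -> parking is acceptable.
far = np.zeros((20, 20), dtype=int)
far[0:2, 0:2] = SCOOTER
far[15:18, 15:18] = TACTILE_BLOCK
print(is_non_parking(far))  # False
```

The paper instead feeds the segmentation output to a trained Swin classifier, which can learn context beyond simple proximity; the rule above only shows why the segmentation stage makes the downstream decision tractable.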