• Title/Abstract/Keyword: Deep learning segmentation

379 search results

Semantic Segmentation of Heterogeneous Unmanned Aerial Vehicle Datasets Using Combined Segmentation Network

  • Ahram, Song
    • 대한원격탐사학회지 / Vol. 39, No. 1 / pp.87-97 / 2023
  • Unmanned aerial vehicles (UAVs) can capture high-resolution imagery from a variety of viewing angles and altitudes; however, they are generally limited to collecting images of small scenes within larger regions. To improve the utility of UAV-acquired datasets for deep learning applications, multiple datasets created from various regions under different conditions are needed. To demonstrate a powerful method for integrating heterogeneous UAV datasets, this paper applies a combined segmentation network (CSN) whose encoding blocks are shared between the UAVid and Semantic Drone datasets to learn their general features, while its decoding blocks are trained separately on each dataset. Experimental results show that the CSN improves the accuracy of specific classes (e.g., cars) that currently account for a low proportion of both datasets. From this result, it is expected that the range of UAV dataset utilization will increase.
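
As a rough illustration of the shared-encoder, per-dataset-decoder idea described in this abstract, the following PyTorch sketch wires one encoder to two dataset-specific decoding heads; the layer sizes, class counts, and module names are assumptions for illustration, not the CSN reported in the paper.

```python
# Minimal sketch of a combined segmentation network: one shared encoder,
# one decoder per dataset. All sizes/names are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, in_ch=3, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(width, width * 2, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.features(x)

class DatasetDecoder(nn.Module):
    def __init__(self, num_classes, width=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.ConvTranspose2d(width * 2, width, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(width, num_classes, 1),
        )
    def forward(self, feats):
        return self.head(feats)

class CombinedSegmentationNet(nn.Module):
    """One encoder shared by both datasets; one decoder trained per dataset."""
    def __init__(self, classes_uavid=8, classes_sdd=23):  # class counts are assumptions
        super().__init__()
        self.encoder = SharedEncoder()
        self.decoders = nn.ModuleDict({
            "uavid": DatasetDecoder(classes_uavid),
            "sdd": DatasetDecoder(classes_sdd),
        })
    def forward(self, x, dataset: str):
        return self.decoders[dataset](self.encoder(x))

# Each batch updates the shared encoder and only the decoder of its own dataset.
model = CombinedSegmentationNet()
logits = model(torch.randn(2, 3, 256, 256), dataset="uavid")  # (2, 8, 256, 256)
```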

CT 이미지 세그멘테이션을 위한 3D 의료 영상 데이터 증강 기법 (3D Medical Image Data Augmentation for CT Image Segmentation)

  • 고성현;양희규;김문성;추현승
    • 인터넷정보학회논문지 / Vol. 24, No. 4 / pp.85-92 / 2023
  • There are active attempts to use deep learning on medical data such as X-ray, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) to solve problems such as determining the presence of disease. Most data-driven deep learning problems require supervised learning to achieve high accuracy and to evaluate performance against ground truth. Supervised learning requires a large set of images and labels, but it is difficult to obtain a sufficient amount of medical image data for training. Various data augmentation techniques can overcome the underfitting of supervised models trained on a small set of medical images and labels. This study explores data augmentation techniques, such as horizontal flipping, rotation, and scaling, that effectively improve the performance of a deep learning-based rib fracture segmentation model. Datasets augmented with horizontal flipping and 30° and 60° rotations contribute to improved model performance, whereas 90° rotation and ×0.5 scaling degrade it. This indicates that appropriate data augmentation techniques must be chosen according to the dataset and task.
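
The following sketch illustrates the kind of paired image/label augmentation discussed in this abstract (horizontal flip, in-plane rotation, scaling) for a 3D CT volume and its rib-fracture mask. The angles and the zoom factor come from the abstract; the axis layout and interpolation orders are assumptions.

```python
# Apply the same geometric augmentation to a 3D CT volume and its label mask.
import numpy as np
from scipy import ndimage

def augment_pair(volume, mask, angle_deg=0.0, zoom=1.0, hflip=False):
    """volume, mask: (D, H, W) arrays; returns transformed copies."""
    if hflip:                                  # left-right flip along the W axis
        volume, mask = volume[..., ::-1], mask[..., ::-1]
    if angle_deg:                              # in-plane (H, W) rotation
        volume = ndimage.rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
        mask = ndimage.rotate(mask, angle_deg, axes=(1, 2), reshape=False, order=0)
    if zoom != 1.0:                            # isotropic in-plane scaling
        volume = ndimage.zoom(volume, (1.0, zoom, zoom), order=1)
        mask = ndimage.zoom(mask, (1.0, zoom, zoom), order=0)
    return np.ascontiguousarray(volume), np.ascontiguousarray(mask)

vol = np.random.rand(32, 128, 128).astype(np.float32)
msk = (np.random.rand(32, 128, 128) > 0.99).astype(np.uint8)
aug_vol, aug_msk = augment_pair(vol, msk, angle_deg=30.0, hflip=True)   # helped in the study
bad_vol, bad_msk = augment_pair(vol, msk, angle_deg=90.0, zoom=0.5)     # degraded performance
```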

딥러닝 기반 실내 디자인 인식 (Deep Learning-based Interior Design Recognition)

  • 이원규;박지훈;이종혁;정희철
    • 대한임베디드공학회논문지 / Vol. 19, No. 1 / pp.47-55 / 2024
  • We spend a lot of time in indoor spaces, and these spaces have a huge impact on our lives. Interior design plays a significant role in making an indoor space attractive and functional, but it must consider many complex elements such as color, pattern, and material. With the increasing demand for interior design, there is a growing need for technologies that analyze these design elements accurately and efficiently. To address this need, this study proposes a deep learning-based design analysis system. The proposed system consists of a semantic segmentation model that classifies spatial components and an image classification model that classifies attributes such as color, pattern, and material from the segmented components. The semantic segmentation model was trained on a dataset of 30,000 indoor interior images collected for this research, and during inference it assigns each pixel of the input image to one of 34 categories. Experiments were conducted with various backbones to obtain the best performance on the collected interior dataset, and the final model achieved an accuracy of 89.05% and a mean intersection over union (mIoU) of 0.5768. For the classification part, a convolutional neural network (CNN) model that has shown high performance in other image recognition tasks was used. To improve the performance of the classification model, we suggest approaches for handling class imbalance and sensitivity to light intensity, and with these methods we achieve satisfactory results in classifying interior design component attributes. In summary, this paper proposes an indoor design analysis system that automatically analyzes and classifies the attributes of indoor images using deep learning-based models. Used as a core module in an AI interior recommendation service, this analysis system can help users pursuing self-interior design to complete their designs more easily and efficiently.
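
The metrics reported above (pixel accuracy and mIoU over 34 categories) can be computed from a confusion matrix as in the generic sketch below; this is not the authors' evaluation code, and only the 34-class count is taken from the abstract.

```python
# Generic pixel accuracy and mean IoU from integer label maps.
import numpy as np

def confusion_matrix(pred, target, num_classes=34):
    """pred, target: integer label maps of identical shape."""
    valid = (target >= 0) & (target < num_classes)
    idx = num_classes * target[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def pixel_accuracy_and_miou(cm):
    acc = np.diag(cm).sum() / cm.sum()
    union = cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm)
    iou = np.diag(cm) / np.maximum(union, 1)          # avoid division by zero
    return acc, iou[union > 0].mean()                 # average over classes present

pred = np.random.randint(0, 34, size=(512, 512))
target = np.random.randint(0, 34, size=(512, 512))
acc, miou = pixel_accuracy_and_miou(confusion_matrix(pred, target))
```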

효율적인 비정형 도로영역 인식을 위한 Semantic segmentation 기반 심층 신경망 구조 (Efficient Deep Neural Network Architecture based on Semantic Segmentation for Paved Road Detection)

  • 박세진;한정훈;문영식
    • 한국정보통신학회논문지 / Vol. 24, No. 11 / pp.1437-1444 / 2020
  • Advances in computer vision systems have driven progress in fields such as security, biometrics, medical imaging, and autonomous driving. In autonomous driving, deep learning-based object recognition and detection techniques are widely used, and recognizing the road region in which a vehicle can drive is a particularly important problem. Unlike the rectangular regions used in general object detection, road regions have irregular shapes, so ROI-based object recognition architectures cannot be applied. In this paper, we propose a deep neural network architecture suited to recognizing irregular road regions using semantic segmentation. We also demonstrate that performance is improved by a multi-scale semantic segmentation technique, a network structure specialized for road regions.
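
The abstract does not detail the multi-scale design, so the sketch below shows only a generic form of multi-scale semantic segmentation inference: run the network at several input scales and average the resized logits. The scales and the dummy two-class (road / not-road) network are assumptions for illustration.

```python
# Generic multi-scale segmentation inference: average logits across scales.
import torch
import torch.nn.functional as F

def multi_scale_segment(model, image, scales=(0.5, 1.0, 1.5)):
    """image: (N, 3, H, W) tensor; returns per-pixel class logits at full resolution."""
    n, _, h, w = image.shape
    fused = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear", align_corners=False)
        logits = model(scaled)
        fused = fused + F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)
    return fused / len(scales)

# Dummy two-class network just to make the sketch runnable.
model = torch.nn.Conv2d(3, 2, kernel_size=1)
with torch.no_grad():
    road_logits = multi_scale_segment(model, torch.randn(1, 3, 240, 320))
road_mask = road_logits.argmax(dim=1)   # irregular (non-rectangular) road region
```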

Synthetic Computed Tomography Generation while Preserving Metallic Markers for Three-Dimensional Intracavitary Radiotherapy: Preliminary Study

  • Jin, Hyeongmin;Kang, Seonghee;Kang, Hyun-Cheol;Choi, Chang Heon
    • 한국의학물리학회지:의학물리 / Vol. 32, No. 4 / pp.172-178 / 2021
  • Purpose: This study aimed to develop a deep learning architecture combining two task models to generate synthetic computed tomography (sCT) images from low-tesla magnetic resonance (MR) images to improve metallic marker visibility. Methods: Twenty-three patients with cervical cancer treated with intracavitary radiotherapy (ICR) were retrospectively enrolled, and images were acquired using both a computed tomography (CT) scanner and a low-tesla MR machine. The CT images were aligned to the corresponding MR images using deformable registration, and the metallic dummy source markers were delineated using threshold-based segmentation followed by manual modification. The deformed CT (dCT), MR, and segmentation mask pairs were used for training and testing. The sCT generation model has a cascaded three-dimensional (3D) U-Net-based architecture that converts MR images to CT images and segments the metallic markers. The performance of the model was evaluated with intensity-based comparison metrics. Results: The proposed model with segmentation loss outperformed the 3D U-Net in terms of errors between the sCT and dCT, while the difference in structural similarity score was not significant. Conclusions: Our study demonstrates a two-task deep learning model for generating sCT images from low-tesla MR images for 3D ICR. This approach will be useful for an MR-only workflow in high-dose-rate brachytherapy.
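
A two-task objective of the kind described here can be sketched as an intensity term between the sCT and the deformed CT plus a segmentation term for the metallic-marker mask. The L1/Dice formulation and the weighting below are assumptions, not the paper's exact training loss.

```python
# Sketch of an intensity + marker-segmentation training objective for sCT generation.
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target_mask, eps=1e-6):
    prob = torch.sigmoid(pred_logits)
    inter = (prob * target_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target_mask.sum() + eps)

def sct_two_task_loss(sct, dct, marker_logits, marker_mask, seg_weight=1.0):
    """sct, dct: (N, 1, D, H, W) images; marker_*: same-shape logits/mask."""
    intensity_term = F.l1_loss(sct, dct)               # error between sCT and dCT
    seg_term = dice_loss(marker_logits, marker_mask)   # metallic-marker segmentation loss
    return intensity_term + seg_weight * seg_term

sct = torch.rand(1, 1, 16, 64, 64)
dct = torch.rand(1, 1, 16, 64, 64)
marker_logits = torch.randn(1, 1, 16, 64, 64)
marker_mask = (torch.rand(1, 1, 16, 64, 64) > 0.98).float()
loss = sct_two_task_loss(sct, dct, marker_logits, marker_mask)
```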

아리랑 5호 위성 영상에서 수계의 의미론적 분할을 위한 딥러닝 모델의 비교 연구 (Comparative Study of Deep Learning Model for Semantic Segmentation of Water System in SAR Images of KOMPSAT-5)

  • 김민지;김승규;이도훈;감진규
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp.206-214 / 2022
  • The extent of damage from floods and droughts can be measured by identifying changes in the extent of water systems, and satellite imagery is used to grasp such changes effectively at a glance. KOMPSAT-5 uses Synthetic Aperture Radar (SAR) to capture images regardless of weather conditions such as clouds and rain. In this paper, various deep learning models are applied to perform semantic segmentation of the water system in these SAR images, and their performance is compared. The models used are U-Net, V-Net, U2-Net, UNet 3+, PSPNet, Deeplab-V3, Deeplab-V3+, and PAN. In addition, performance was compared when the data were augmented by applying elastic deformation to the existing SAR image dataset. Without data augmentation, U-Net performed best, with an IoU of 97.25% and a pixel accuracy of 98.53%. With data augmentation, Deeplab-V3 showed an IoU of 95.15% and V-Net showed the best pixel accuracy of 96.86%.
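
Elastic deformation, the augmentation compared in this study, can be sketched as warping the SAR image and its water mask with a smoothed random displacement field; the alpha and sigma values below are arbitrary assumptions.

```python
# Elastic deformation of a SAR image and its water mask with the same displacement field.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, mask, alpha=300.0, sigma=12.0, seed=0):
    """image, mask: 2-D arrays of identical shape; returns deformed copies."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    warped_img = map_coordinates(image, coords, order=1, mode="reflect")
    warped_msk = map_coordinates(mask, coords, order=0, mode="reflect")
    return warped_img, warped_msk

sar = np.random.rand(256, 256).astype(np.float32)
water = (sar > 0.5).astype(np.uint8)
sar_aug, water_aug = elastic_deform(sar, water)
```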

A Novel Whale Optimized TGV-FCMS Segmentation with Modified LSTM Classification for Endometrium Cancer Prediction

  • T. Satya Kiranmai;P.V.Lakshmi
    • International Journal of Computer Science & Network Security / Vol. 23, No. 5 / pp.53-64 / 2023
  • Early detection of endometrial carcinoma in the uterus is essential for effective treatment. Endometrial carcinoma is the most serious form of endometrial cancer, since it is considerably more likely to affect other parts of the body if not detected and treated early. Non-invasive medical computer vision, also known as medical image processing, is becoming increasingly important in the clinical diagnosis of various diseases. Such techniques provide a tool for automatic image processing, allowing an accurate and timely assessment of the lesion. One of the most difficult aspects of developing an effective automatic classification system is the absence of large datasets. Using image processing and deep learning, this article presents an artificial endometrium cancer diagnosis system. The process in this study includes gathering dermoscopy images from the database, preprocessing, segmentation using hybrid Fuzzy C-Means (FCM), and optimizing the weights using the Whale Optimization Algorithm (WOA). The characteristics of the damaged endometrium cells are retrieved using a feature extraction approach after the magnetic resonance images have been segmented. The extracted characteristics are classified using deep learning-based Long Short-Term Memory (LSTM) and bi-directional LSTM classifiers. On the publicly accessible dataset, the suggested classifiers obtain a classification accuracy of 97% and a segmentation accuracy of 93%.
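
As a reference for the base segmentation step named here, the sketch below implements plain Fuzzy C-Means clustering on pixel intensities with the standard update rules; the hybrid TGV term and the Whale Optimization of the weights are not reproduced, and the fuzzifier and initialization are assumptions.

```python
# Standard Fuzzy C-Means on a 1-D intensity feature (no TGV term, no WOA tuning).
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """x: (N,) feature vector (e.g., flattened MR intensities). Returns memberships, centers."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                # fuzzy memberships sum to 1 per pixel
    for _ in range(n_iter):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-9
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)     # standard FCM membership update
    return u, centers

pixels = np.concatenate([np.random.normal(0.2, 0.05, 500),
                         np.random.normal(0.7, 0.05, 500)])
memberships, centers = fuzzy_c_means(pixels, n_clusters=2)
labels = memberships.argmax(axis=1)                  # hard segmentation labels
```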

Deep learning approach to generate 3D civil infrastructure models using drone images

  • Kwon, Ji-Hye;Khudoyarov, Shekhroz;Kim, Namgyu;Heo, Jun-Haeng
    • Smart Structures and Systems / Vol. 30, No. 5 / pp.501-511 / 2022
  • Three-dimensional (3D) models have become crucial for improving civil infrastructure analysis, and they can be used for various purposes such as damage detection, risk estimation, resolving potential safety issues, alarm detection, and structural health monitoring. 3D point cloud data are used not only to build visual models but also to analyze the state of structures and to monitor them using semantic data. This study proposes automating the generation of high-quality 3D point cloud data and removing noise using deep learning algorithms. Large-format aerial images of civil infrastructure, such as cut slopes and dams, captured by drones were used to develop a workflow for automatically generating a 3D point cloud model. Through image cropping, downscaling/upscaling, semantic segmentation, generation of segmentation masks, and region extraction algorithms, the generation of the point cloud was automated. Compared with generating the point cloud model from raw images, our method could effectively improve the quality of the model, remove noise, and reduce the processing time. The results showed that the size of the 3D point cloud model created using the proposed method was significantly reduced: the number of points was reduced by 20-50%, and distant points were recognized as noise. This method can be applied to the automatic generation of high-quality 3D point cloud models of civil infrastructure using aerial imagery.
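
The abstract reports that distant points were treated as noise; the sketch below shows only a generic distant-point filter over an XYZ point cloud, not the region-extraction algorithm developed in the paper, and the threshold factor is an arbitrary assumption.

```python
# Drop points far from the cloud's median position (a generic distant-point noise filter).
import numpy as np

def drop_distant_points(points, factor=3.0):
    """points: (N, 3) array of XYZ coordinates; keep points near the median position."""
    center = np.median(points, axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    keep = dist <= factor * np.median(dist)
    return points[keep]

cloud = np.random.randn(10000, 3)
cloud[:100] *= 50.0                          # simulated far-away noise points
cleaned = drop_distant_points(cloud)
print(len(cloud), "->", len(cleaned), "points after distant-point filtering")
```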

딥 러닝 기반의 영상분할 알고리즘을 이용한 의료영상 3차원 시각화에 관한 연구 (Three-Dimensional Visualization of Medical Image using Image Segmentation Algorithm based on Deep Learning)

  • 임상헌;김영재;김광기
    • 한국멀티미디어학회논문지 / Vol. 23, No. 3 / pp.468-475 / 2020
  • In this paper, we propose a deep learning-based three-dimensional visualization system for medical images in augmented reality. In the proposed system, an artificial neural network model performs fully automatic segmentation of the lung and pulmonary nodule regions from chest CT images. After applying a three-dimensional volume rendering method to the segmented images, the result is visualized on augmented reality devices. In our experiments, when nodules were present in the lung region, they could be easily distinguished with the naked eye, and the location and shape of the lesions were intuitively confirmed. The evaluation was performed by comparing the automated segmentation results on the test dataset with manually segmented images. The segmentation model achieved a lung-region DSC (Dice Similarity Coefficient) of 98.77%, precision of 98.45%, and recall of 99.10%, and a pulmonary-nodule-region DSC of 91.88%, precision of 93.05%, and recall of 90.94%. If the proposed system is applied in medical fields such as clinical practice and medical education, it is expected to contribute to patient-specific organ modeling, lesion analysis, and surgical education and training.
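
The overlap metrics reported above (DSC, precision, recall) follow the standard definitions; the sketch below computes them from a predicted and a manually segmented binary mask and is not the authors' evaluation script.

```python
# DSC, precision, and recall from a predicted and a ground-truth binary mask.
import numpy as np

def overlap_metrics(pred, gt, eps=1e-8):
    """pred, gt: boolean arrays of the same shape (e.g., lung or nodule masks)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dsc, precision, recall

pred_mask = np.random.rand(64, 256, 256) > 0.7
gt_mask = np.random.rand(64, 256, 256) > 0.7
dsc, prec, rec = overlap_metrics(pred_mask, gt_mask)
```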

후두 내시경 영상에서의 성문 분할 및 성대 점막 형태의 정량적 평가 (Segmentation of the Glottis and Quantitative Measurement of the Vocal Cord Mucosal Morphology in the Laryngoscopic Image)

  • 이선민;오석;김영재;우주현;김광기
    • 한국멀티미디어학회논문지 / Vol. 25, No. 5 / pp.661-669 / 2022
  • The purpose of this study is to compare and analyze deep learning (DL) and digital image processing (DIP) techniques by using the glottis segmentation results of the two methods, followed by quantification of the degree of asymmetry of the vocal cord mucosa. The data consist of 40 normal and abnormal images. The DL model is based on the Deeplab V3 architecture, and the Canny edge detector algorithm and morphological operations are used for the DIP technique. According to the segmentation results, the average accuracy of the DL model and the DIP technique was 97.5% and 94.7%, respectively. The quantification results showed high correlation coefficients for both the DL experiment (r=0.8512, p<0.0001) and the DIP experiment (r=0.7784, p<0.0001). In conclusion, the DL model showed relatively higher segmentation accuracy than the DIP technique. This paper shows the clinical applicability of applying the segmentation and asymmetry quantification algorithms to the glottal area in laryngoscopic images.
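
The classical DIP pipeline named here (Canny edges followed by morphological operations) can be sketched with OpenCV as below; the thresholds, kernel size, and the largest-contour heuristic are assumptions for illustration.

```python
# Canny edge detection + morphological closing to extract a glottis-like region.
import cv2
import numpy as np

def segment_glottis_dip(gray):
    """gray: 8-bit single-channel laryngoscopic frame; returns a binary region mask."""
    edges = cv2.Canny(gray, 50, 150)                              # edge map
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)     # bridge edge gaps
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,     # OpenCV 4.x signature
                                   cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(gray)
    if contours:
        largest = max(contours, key=cv2.contourArea)              # assume glottis = largest blob
        cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    return mask

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
glottis_mask = segment_glottis_dip(frame)
```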