• Title/Summary/Keyword: deep learning region segmentation (딥러닝 영역분할)

Search Results: 60

Mobile App for Detecting Canine Skin Diseases Using U-Net Image Segmentation (U-Net 기반 이미지 분할 및 병변 영역 식별을 활용한 반려견 피부질환 검출 모바일 앱)

  • Bo Kyeong Kim; Jae Yeon Byun; Kyung-Ae Cha
    • Journal of Korea Society of Industrial Information Systems / v.29 no.4 / pp.25-34 / 2024
  • This paper presents the development of a mobile application that detects and identifies canine skin diseases by training a deep learning-based U-Net model to infer the presence and location of skin lesions from images. U-Net, widely used for image segmentation in medical imaging, is effective at delineating specific regions of an image in polygonal form, making it suitable for identifying lesion areas in dogs. In this study, six major canine skin diseases were defined as classes, and the U-Net model was trained to differentiate among them. The model was then deployed in a mobile app, allowing users to obtain lesion analysis and prediction from a simple camera shot, with the results returned directly to the user. This enables pet owners to monitor their pets' health and obtain information that aids early diagnosis. By providing a quick and accurate diagnostic aid for pet health management through deep learning, this study highlights the value of developing an easily accessible service for home use.
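As a sketch of the kind of post-processing such an app would need, the snippet below summarizes a U-Net argmax output into per-disease pixel counts and bounding boxes. The class names are hypothetical (the paper does not publish its label map), and the trained model itself is omitted.

```python
import numpy as np

# Hypothetical class indices; the paper defines six canine skin-disease
# classes but does not publish the label map, so these names are made up.
CLASSES = ["background", "papule", "plaque", "pustule", "crust", "erosion", "nodule"]

def lesion_report(class_map: np.ndarray) -> dict:
    """Summarize a U-Net argmax output (H x W int map) into per-class
    pixel counts and bounding boxes, as an app might display them."""
    report = {}
    for idx, name in enumerate(CLASSES):
        if idx == 0:            # skip background
            continue
        ys, xs = np.nonzero(class_map == idx)
        if ys.size == 0:        # class not present in this image
            continue
        report[name] = {
            "pixels": int(ys.size),
            "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())),
        }
    return report

# Toy 8x8 prediction with a 2x3 patch of class 2 ("plaque").
pred = np.zeros((8, 8), dtype=int)
pred[2:4, 1:4] = 2
print(lesion_report(pred))  # {'plaque': {'pixels': 6, 'bbox': (1, 2, 3, 3)}}
```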

Evaluating Usefulness of Deep Learning Based Left Ventricle Segmentation in Cardiac Gated Blood Pool Scan (게이트심장혈액풀검사에서 딥러닝 기반 좌심실 영역 분할방법의 유용성 평가)

  • Oh, Joo-Young; Jeong, Eui-Hwan; Lee, Joo-Young; Park, Hoon-Hee
    • Journal of radiological science and technology / v.45 no.2 / pp.151-158 / 2022
  • The Cardiac Gated Blood Pool (GBP) scintigram, a nuclear medicine imaging study, calculates the left ventricular Ejection Fraction (EF) by segmenting the left ventricle from the heart. However, accurately segmenting the substructures of the heart requires specialized knowledge of cardiac anatomy, and the calculated left ventricular EF may differ depending on how the expert draws the region. In this study, a DeepLabV3 model with a ResNet-50 backbone was trained on 93 GBP images. The trained model was then applied to a separate test set of 23 GBP studies to evaluate the reproducibility of the region of interest (ROI) and of the left ventricular EF. Pixel accuracy, dice coefficient, and IoU for the ROI were 99.32±0.20, 94.65±1.45, and 89.89±2.62(%) at the diastolic phase, and 99.26±0.34, 90.16±4.19, and 82.33±6.69(%) at the systolic phase, respectively. The left ventricular EF averaged 60.37±7.32% with human-set ROIs and 58.68±7.22% with ROIs from the deep learning segmentation model (p<0.05). The automated segmentation method presented in this study predicts ROIs and left ventricular EF values similar to the human-set averages when given an arbitrary GBP image. If automatic segmentation is further developed and applied to functional examinations in nuclear medicine cardiac scintigraphy that require ROI placement, it is expected to greatly improve the efficiency and accuracy of processing and analysis by nuclear medicine specialists.
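The reported metrics and the count-based EF formula are standard; a minimal numpy sketch (the DeepLabV3/ResNet-50 model itself is omitted, and the toy masks and counts below are illustrative) might look like:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    """Fraction of pixels where prediction and ground truth agree."""
    return float((pred == gt).mean())

def dice(pred, gt):
    """Dice coefficient = 2|A∩B| / (|A|+|B|) for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over Union for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def ejection_fraction(ed_counts, es_counts):
    """EF(%) = (ED - ES) / ED * 100 from background-corrected LV counts."""
    return (ed_counts - es_counts) / ed_counts * 100.0

gt = np.zeros((4, 4), bool); gt[1:3, 1:3] = True   # 4 px ground truth
pr = np.zeros((4, 4), bool); pr[1:3, 1:4] = True   # 6 px prediction, overlap 4
print(round(dice(pr, gt), 3), round(iou(pr, gt), 3))   # 0.8 0.667
print(ejection_fraction(1000.0, 400.0))                # 60.0
```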

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee; Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.813-826 / 2022
  • AI-based image processing technologies have been widely studied in various fields. However, the more lightweight the board, the harder it is to run computation-heavy image processing algorithms on it. In this paper, we propose a deep learning object recognition method for lightweight embedded boards. We first determine the region of interest with a semantic segmentation network that requires relatively little computation. After masking that region, a more accurate deep learning algorithm performs object detection on it, improving accuracy for the efficient neural network (ENet) and You Only Look Once (YOLO) toward real-time object recognition on lightweight embedded boards. This research is expected to be useful for autonomous driving applications, which must be much lighter and cheaper than existing approaches to object recognition.
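The mask-then-detect idea can be sketched as below. `mask_crop` is a hypothetical helper (not from the paper) that zeroes pixels outside the semantic mask and crops to its bounding box, so the heavier detector only processes the region of interest.

```python
import numpy as np

def mask_crop(image: np.ndarray, seg_mask: np.ndarray):
    """Zero out pixels outside the semantic mask and crop to its bounding
    box, so a heavier detector only sees the region of interest.
    Returns the cropped image and the (x, y) offset of the crop."""
    ys, xs = np.nonzero(seg_mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    masked = image * seg_mask[..., None]          # broadcast over channels
    return masked[y0:y1, x0:x1], (x0, y0)

img = np.full((10, 10, 3), 7, dtype=np.uint8)     # toy RGB snapshot
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:6, 3:9] = 1                                # segmented object region
roi, offset = mask_crop(img, mask)
print(roi.shape, offset)   # (4, 6, 3) (3, 2)
# roi would now be passed to the detection network (e.g. YOLO);
# `offset` maps detections back to full-image coordinates.
```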

Aerial Scene Labeling Based on Convolutional Neural Networks (Convolutional Neural Networks기반 항공영상 영역분할 및 분류)

  • Na, Jong-Pil; Hwang, Seung-Jun; Park, Seung-Je; Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.19 no.6 / pp.484-491 / 2015
  • The volume of aerial imagery has grown greatly with the spread of digital optical imaging technology and the development of UAVs. Aerial images have been used for extracting ground properties, classification, change detection, image fusion, and mapping. In image analysis in particular, deep learning algorithms have opened a new paradigm that overcomes limitations in the field of pattern recognition. This paper demonstrates the possibility of applying deep learning (ConvNet) based segmentation and classification of aerial scenes to a wider range of fields. We build a 4-class image database of 3,000 images in total, covering Road, Building, Yard, and Forest. Each class has a characteristic pattern, so the resulting feature vector maps differ. Our system consists of feature extraction, classification, and training. Feature extraction is built from two ConvNet-based layers, and classification is performed with a multilayer perceptron and logistic regression.
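A toy numpy sketch of the described pipeline, two convolutional layers feeding a logistic classifier, is shown below. The kernels and classifier weights are arbitrary stand-ins for parameters the paper learns, and the pooled scalar feature stands in for a full feature vector.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in ConvNets)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two stacked conv layers (ReLU between) -> global pooling -> logistic unit.
img = np.random.default_rng(0).random((8, 8))   # toy grayscale patch
k1 = np.array([[1., 0.], [0., -1.]])            # toy edge-like kernel
k2 = np.ones((2, 2)) / 4.0                      # toy smoothing kernel
f1 = np.maximum(conv2d(img, k1), 0.0)           # layer 1 + ReLU -> 7x7
f2 = np.maximum(conv2d(f1, k2), 0.0)            # layer 2 + ReLU -> 6x6
feature = f2.mean()                             # global average pooling
w, b = 2.0, -0.1                                # toy classifier weights
prob = logistic(w * feature + b)                # class probability in (0, 1)
print(f2.shape, 0.0 < prob < 1.0)               # (6, 6) True
```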

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) have been used in various practical applications, and optical images are the main training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, in addition to optical images, for training Detectron2, one of the improved R-CNN (Region-based Convolutional Neural Network) models. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features representing statistical texture information derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mix of early fusion and late fusion, improved the building detection rate by 32.65% compared to training on optical images only. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
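The paper's exact hybrid-fusion recipe is not given in the abstract, but the two ingredients it mixes can be sketched in numpy: early fusion as channel concatenation of co-registered modalities, and late fusion as weighted averaging of per-model score maps. Shapes and weights below are illustrative.

```python
import numpy as np

def early_fusion(optical, infrared, lidar, texture):
    """Stack co-registered modalities as extra channels before the network."""
    return np.concatenate([optical, infrared, lidar, texture], axis=-1)

def late_fusion(score_maps, weights=None):
    """Average per-modality score maps produced by separate networks."""
    maps = np.stack(score_maps)
    w = np.ones(len(score_maps)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return np.tensordot(w, maps, axes=1)

H, W = 4, 4
opt = np.zeros((H, W, 3))      # RGB optical image
ir = np.ones((H, W, 1))        # infrared band
lid = np.zeros((H, W, 1))      # LiDAR-derived raster (e.g. nDSM)
tex = np.zeros((H, W, 4))      # e.g. 4 Haralick texture features
fused = early_fusion(opt, ir, lid, tex)
print(fused.shape)             # (4, 4, 9)

avg = late_fusion([np.full((H, W), 0.2), np.full((H, W), 0.8)])
print(float(avg[0, 0]))        # 0.5
```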

Efficient Deep Neural Network Architecture based on Semantic Segmentation for Paved Road Detection (효율적인 비정형 도로영역 인식을 위한 Semantic segmentation 기반 심층 신경망 구조)

  • Park, Sejin; Han, Jeong Hoon; Moon, Young Shik
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1437-1444 / 2020
  • With the development of computer vision systems, many advances have been made in surveillance, biometrics, medical imaging, and autonomous driving. In autonomous driving in particular, deep learning object detection techniques are widely used, and paved road detection is an especially crucial problem. Unlike the targets of ROI detection algorithms used in general object detection, the structure of a paved road in an image is irregular, so ROI-based object recognition architectures are not applicable. In this paper, we propose a deep neural network architecture for atypical paved road detection based on a semantic segmentation network. In addition, we introduce a multi-scale semantic segmentation network, an architecture specialized for paved road detection. We demonstrate that the proposed method significantly improves performance.
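One common form of multi-scale semantic segmentation inference is: run the network at several input scales, resize the outputs back, and average. The nearest-neighbour resize and the dummy `predict` below are illustrative stand-ins, not the paper's network or its exact multi-scale design.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbour resize, enough for a multi-scale sketch."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[ys][:, xs]

def multi_scale_predict(img, predict, scales=(0.5, 1.0, 2.0)):
    """Run `predict` (image -> per-pixel road probability) at several
    scales and average the rescaled outputs."""
    h, w = img.shape[:2]
    acc = np.zeros((h, w))
    for s in scales:
        scaled = resize_nn(img, max(1, int(h * s)), max(1, int(w * s)))
        acc += resize_nn(predict(scaled), h, w)
    return acc / len(scales)

# Dummy "network": everything brighter than 0.5 is road.
predict = lambda x: (x > 0.5).astype(float)
img = np.zeros((8, 8)); img[4:, :] = 1.0          # bottom half is "road"
prob = multi_scale_predict(img, predict)
print(prob[6, 3], prob[1, 3])   # 1.0 0.0
```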

Development of Deep Learning Based Ensemble Land Cover Segmentation Algorithm Using Drone Aerial Images (드론 항공영상을 이용한 딥러닝 기반 앙상블 토지 피복 분할 알고리즘 개발)

  • Hae-Gwang Park; Seung-Ki Baek; Seung Hyun Jeong
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.71-80 / 2024
  • In this study, we propose an ensemble learning technique to enhance the semantic segmentation performance on images captured by Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in fields such as urban planning, techniques applying deep learning segmentation methods to land cover segmentation have been actively developed. The study suggests a method that combines prominent segmentation models, namely U-Net, DeepLabV3, and Fully Convolutional Network (FCN), to improve segmentation prediction performance. The proposed approach integrates the training loss, validation accuracy, and class scores of the three segmentation models to enhance overall prediction performance. The method was applied and evaluated on a land cover segmentation problem with seven classes: buildings, roads, parking lots, fields, trees, empty spaces, and areas with unspecified labels, using images captured by UAVs. The performance of the ensemble model was evaluated by mean Intersection over Union (mIoU), and a comparison of the proposed ensemble model with the three existing segmentation methods showed improved mIoU performance. Consequently, the study confirms that the proposed technique can enhance the performance of semantic segmentation models.
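One common way to realize such an ensemble, averaging the per-class score maps of the member models before the argmax and scoring with mIoU, can be sketched as follows. The exact weighting by training loss and validation accuracy used in the paper is not reproduced here; the toy score maps stand in for U-Net/DeepLabV3/FCN outputs.

```python
import numpy as np

def ensemble_argmax(score_maps):
    """Average per-model class-score maps (each H x W x C) and take argmax."""
    return np.mean(np.stack(score_maps), axis=0).argmax(axis=-1)

def miou(pred, gt, n_classes):
    """Mean Intersection over Union over classes present in pred or gt."""
    ious = []
    for c in range(n_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent from both; skip
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Two toy models over 3 classes on a 2x2 image.
m1 = np.array([[[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]],
               [[0.1, 0.1, 0.8], [0.5, 0.4, 0.1]]])
m2 = np.array([[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]],
               [[0.2, 0.1, 0.7], [0.1, 0.8, 0.1]]])
gt = np.array([[0, 1], [2, 1]])
pred = ensemble_argmax([m1, m2])
print(pred.tolist(), miou(pred, gt, 3))   # [[0, 1], [2, 1]] 1.0
```

Note that in the last pixel the two models disagree (m1 votes class 0, m2 votes class 1); averaging the scores lets the more confident model win.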

An Improved VTON (Virtual-Try-On) Algorithm using a Pair of Cloth and Human Image (이미지를 사용한 가상의상착용을 위한 개선된 알고리즘)

  • Minar, Matiur Rahman; Tuan, Thai Thanh; Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems / v.25 no.2 / pp.11-18 / 2020
  • Recently, a series of studies on virtual try-on (VTON) using images have been published. A prior comparison study analyzed representative methods (an SCMM-based non-deep-learning method and the deep learning based VITON and CP-VTON) on clothing and user images, according to the person's posture and body type, the degree of occlusion of the clothes, and the characteristics of the clothes. In this paper, we tackle problems observed in the best-performing method, CP-VTON. The issues addressed are segmentation of the subject, pixel generation in unintended areas, the missing warped cloth mask, and the cost function used in training, and the algorithm is modified to improve them. The results show some improvement in SSIM and a significant improvement in subjective evaluation.
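SSIM, the objective metric reported above, can be computed in a simplified single-window form as below; standard implementations instead apply the same formula in a sliding Gaussian window and average the resulting map.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: the standard formula computed over the whole
    image rather than per sliding window (a simplification)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
a = rng.random((16, 16))
print(round(float(ssim_global(a, a)), 4))       # 1.0 (identical images)
print(float(ssim_global(a, 1.0 - a)) < 1.0)     # True (inverted image)
```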

Segmentation of Natural Fine Aggregates in Micro-CT Microstructures of Recycled Aggregates Using Unet-VGG16 (Unet-VGG16 모델을 활용한 순환골재 마이크로-CT 미세구조의 천연골재 분할)

  • Sung-Wook Hong; Deokgi Mun; Se-Yun Kim; Tong-Seok Han
    • Journal of the Computational Structural Engineering Institute of Korea / v.37 no.2 / pp.143-149 / 2024
  • Segmentation of material phases through image analysis is essential for analyzing the microstructure of materials. Micro-CT images exhibit variations in grayscale values depending on the phases constituting the material, and phase segmentation is generally achieved by comparing these grayscale values. In waste concrete used as recycled aggregate, however, it is challenging to distinguish hydrated cement paste from natural aggregates, as these components exhibit similar grayscale values in micro-CT images. In this study, we propose a method for automatically separating aggregates in micro-CT images of concrete. Using the Unet-VGG16 deep-learning network, we introduce a technique that segments the 2D aggregate images and stacks them to obtain 3D aggregate images. Image filtering is then employed to separate aggregate particles from the selected 3D aggregate images. The performance of aggregate segmentation is validated through accuracy, precision, recall, and F1-score assessments.
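The stack-then-filter step can be sketched in plain Python/numpy: stack per-slice masks into a volume and drop small 6-connected components. The paper's actual filtering criteria are not specified in the abstract, so `min_voxels` below is an illustrative threshold.

```python
import numpy as np
from collections import deque

def stack_slices(slices):
    """Stack per-slice 2-D aggregate masks into a 3-D volume (Z, H, W)."""
    return np.stack(slices).astype(bool)

def filter_small(volume, min_voxels):
    """Keep only 6-connected components with >= min_voxels voxels,
    a simple stand-in for the paper's image filtering step."""
    seen = np.zeros_like(volume, bool)
    out = np.zeros_like(volume, bool)
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(volume)):
        if seen[start]:
            continue
        comp, q = [], deque([start])        # BFS over one component
        seen[start] = True
        while q:
            z, y, x = q.popleft()
            comp.append((z, y, x))
            for dz, dy, dx in nbrs:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and volume[n] and not seen[n]:
                    seen[n] = True
                    q.append(n)
        if len(comp) >= min_voxels:         # keep only large particles
            for v in comp:
                out[v] = True
    return out

# Two slices: one 2x2x2 blob (8 voxels) plus one isolated voxel of noise.
s0 = np.zeros((4, 4), int); s0[0:2, 0:2] = 1; s0[3, 3] = 1
s1 = np.zeros((4, 4), int); s1[0:2, 0:2] = 1
vol = stack_slices([s0, s1])
kept = filter_small(vol, min_voxels=4)
print(int(vol.sum()), int(kept.sum()))   # 9 8
```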

Deep Image Retrieval using Attention and Semantic Segmentation Map (관심 영역 추출과 영상 분할 지도를 이용한 딥러닝 기반의 이미지 검색 기술)

  • Minjung Yoo; Eunhye Jo; Byoungjun Kim; Sunok Kim
    • Journal of Broadcast Engineering / v.28 no.2 / pp.230-237 / 2023
  • Self-driving is a key technology of the fourth industrial revolution and can be applied in various settings such as cars, drones, and robots. Among its components, localization, the technology that identifies the location of objects or users using GPS, sensors, and maps, is one of the key technologies for implementing autonomous driving. Localization can be done with GPS or LiDAR, but such equipment is very expensive and heavy, and precise location estimation is difficult in places with radio interference such as underground passages or tunnels. In this paper, to compensate for this, we propose an image retrieval method that uses an attention module and image segmentation maps, taking color images acquired with a low-cost vision camera as input.
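A hedged sketch of such a retrieval pipeline: pool a feature map with an attention weighting into a global descriptor, then rank database descriptors by cosine similarity. The feature map and attention map below are random placeholders for network outputs, not the paper's model.

```python
import numpy as np

def attention_pool(feat, attn):
    """Attention-weighted global pooling: feat (H, W, C), attn (H, W) -> (C,)."""
    w = attn / attn.sum()                  # normalize attention weights
    return np.tensordot(w, feat, axes=2)   # weighted sum over H and W

def retrieve(query_desc, db_descs):
    """Rank database images by cosine similarity to the query descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))            # indices, best match first

rng = np.random.default_rng(2)
feat = rng.random((4, 4, 8))               # placeholder CNN feature map
attn = rng.random((4, 4))                  # placeholder attention map
q = attention_pool(feat, attn)
db = np.stack([rng.random(8), q * 3.0, rng.random(8)])  # index 1 matches
print(int(retrieve(q, db)[0]))             # 1
```

Cosine similarity ignores descriptor magnitude, which is why the scaled copy at index 1 still ranks first.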